
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources needed for today's cloud computing workloads. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by a single operator. To provide high-quality access to the variety of applications and services hosted in datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the topologies proposed for them, their traffic properties, and the general challenges and objectives of traffic control in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to give readers a broad view of the options and factors to weigh when considering a traffic control mechanism. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. Finally, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
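
    As a concrete illustration of one mechanism in this space, traffic shaping, the sketch below implements a basic token-bucket shaper; the rate and burst values are hypothetical and the code is not taken from the paper.

```python
# Illustrative token-bucket traffic shaper (not from the paper): packets are
# admitted only while enough tokens remain; tokens refill at `rate_bps`.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes       # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Return True if the packet may be sent now, False if it must wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Example: shape a flow to ~10 Mbit/s with a 15 KB burst allowance.
shaper = TokenBucket(rate_bps=10e6, burst_bytes=15_000)
sent = deferred = 0
for _ in range(100):
    if shaper.allow(1500):              # 1500-byte (MTU-sized) packets
        sent += 1
    else:
        deferred += 1                   # a real shaper would queue these instead
print(f"sent={sent} deferred={deferred}")
```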

    Scheduling policies and system software architectures for mixed-criticality computing

    The mixed-criticality model of computation is being increasingly adopted in timing-sensitive systems. The model not only ensures that the most critical tasks in a system never fail, but also aims for better resource utilization under normal conditions. In this report, we describe the widely used mixed-criticality task model and fixed-priority scheduling algorithms for that model on uniprocessors. Because the mixed-criticality task model and its scheduling policies demand it, isolation among tasks, both temporal and spatial, is one of the main requirements from a system design point of view. Various virtualization techniques have been used to design system software architectures with isolation as the goal. We discuss a few such system software architectures that are being, or can be, used for mixed-criticality computation.
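
    To make the task model concrete, the sketch below simulates the common two-budget convention used in much of the mixed-criticality literature: each task carries a LO-mode and a HI-mode execution budget, and an overrun by a HI-criticality job triggers a switch to HI mode in which LO-criticality jobs are shed. The task set and numbers are hypothetical and not taken from the report.

```python
# Minimal sketch of a mixed-criticality mode switch (hypothetical task set, not
# a specific algorithm from the report): each task has a LO-mode and a HI-mode
# execution budget; when a HI-criticality job runs past its LO budget, the
# system enters HI mode and LO-criticality jobs are shed to protect HI tasks.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "LO" or "HI"
    c_lo: float        # execution budget assumed in LO mode (ms)
    c_hi: float        # pessimistic budget guaranteed in HI mode (ms)

def run_jobs(tasks, observed_ms):
    """tasks are listed in decreasing fixed priority; observed_ms maps name -> actual run time (ms)."""
    mode = "LO"
    for t in tasks:
        if mode == "HI" and t.criticality == "LO":
            print(f"{t.name}: shed (system is in HI mode)")
            continue
        budget = t.c_lo if mode == "LO" else t.c_hi
        c = observed_ms[t.name]
        if t.criticality == "HI" and mode == "LO" and c > t.c_lo:
            mode = "HI"   # budget overrun by a HI task triggers the mode switch
            print(f"{t.name}: ran {c} ms > C_LO={t.c_lo} ms -> switching to HI mode")
        elif c <= budget:
            print(f"{t.name}: ran {c} ms within its {mode}-mode budget of {budget} ms")
        else:
            print(f"{t.name}: budget overrun in {mode} mode")
    return mode

tasks = [Task("flight_ctrl", "HI", c_lo=2, c_hi=5),
         Task("nav",         "HI", c_lo=3, c_hi=8),
         Task("logging",     "LO", c_lo=1, c_hi=1),
         Task("display",     "LO", c_lo=4, c_hi=4)]
run_jobs(tasks, {"flight_ctrl": 2.6, "nav": 2.5, "logging": 0.8, "display": 3.0})
```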

    Regulating Search Warrant Execution Procedure for Stored Electronic Communications

    Electronic communication services, from email, to social media, to messaging applications, have not only dramatically changed daily life but have also had a profound impact on criminal investigations and procedure. The often large volume of electronically stored information has led to a two-step process for search warrant execution, codified in Federal Criminal Procedure Rule 41. When conducting a search pursuant to Rule 41, law enforcement often retains both responsive items (materials that fall within the scope of the warrant) and nonresponsive materials (intermingled items that may be searched but ultimately exceed the scope of the warrant). This possession of nonresponsive material creates a tension between the account holder's privacy interests and the government's ability to conduct an effective search. Courts and scholars have implemented and proposed a range of approaches for search warrant execution in light of concerns about sweeping general searches and the practicalities of searching electronically stored information. This Note examines these approaches to regulating search warrant execution procedure in the context of stored electronic communications. This Note also discusses the strengths and shortcomings of these various mechanisms and concludes that Rule 41 should be amended to provide standards for the retention and use of nonresponsive material.

    Challenges Using Linux as a Real-Time Operating System

    Human-in-the-loop (HITL) simulation groups at NASA and the Air Force Research Lab have been using Linux as a real-time operating system (RTOS) for over a decade. More recently, SpaceX has revealed that it uses Linux as an RTOS for its Falcon launch vehicles and Dragon capsules. As Linux makes its way from ground facilities to flight-critical systems, it is necessary to recognize that the real-time capabilities in Linux are cobbled onto a kernel architecture designed for general-purpose computing. The Linux kernel contains numerous design decisions that favor throughput over determinism and latency. These decisions often require workarounds in the application, or customization of the kernel, to restore a high probability that Linux will meet its deadlines.
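
    One widely used workaround of this kind is to move a latency-sensitive process into the SCHED_FIFO real-time class, lock its memory, and pin it to a CPU. The sketch below shows how that can be done from Python on Linux; it requires root or CAP_SYS_NICE, and the priority and CPU choices are illustrative only.

```python
# Illustrative Linux real-time setup (requires root or CAP_SYS_NICE): move the
# calling process into the fixed-priority SCHED_FIFO class and lock all pages
# in RAM so neither the CFS scheduler nor demand paging adds unbounded latency.
import ctypes
import os

def make_realtime(priority: int = 80) -> None:
    # 1. Switch this process to the FIFO real-time scheduling class.
    max_prio = os.sched_get_priority_max(os.SCHED_FIFO)
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(min(priority, max_prio)))

    # 2. Lock current and future memory mappings to prevent page-fault latency.
    MCL_CURRENT, MCL_FUTURE = 1, 2
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        raise OSError(ctypes.get_errno(), "mlockall failed")

    # 3. Pin to a single CPU (CPU 0 here; a real deployment would pick an isolated core).
    os.sched_setaffinity(0, {0})

if __name__ == "__main__":
    make_realtime()
    print("running with scheduling policy", os.sched_getscheduler(0))  # 1 == SCHED_FIFO on Linux
```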

    Context-aware adaptation in DySCAS

    DySCAS is a dynamically self-configuring middleware for automotive control systems. Adding autonomic, context-aware dynamic configuration to automotive control systems brings a potential for a wide range of benefits in terms of robustness, flexibility, upgradability, and so on. However, automotive systems are a particularly challenging domain for the deployment of autonomics concepts, combining real-time performance constraints, severe resource limitations, safety-critical aspects, and cost pressures. For these reasons, current systems are statically configured. This paper describes the dynamic run-time configuration aspects of DySCAS and focuses on the extent to which context-aware adaptation has been achieved in DySCAS, and the ways in which the various design and implementation challenges are met.
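
    DySCAS itself is not reproduced here; as a generic illustration of the kind of context-aware reconfiguration the paper discusses, the sketch below evaluates simple context rules and returns the reconfiguration actions that currently apply. The rule names, context fields, and thresholds are hypothetical.

```python
# Generic sketch of context-aware reconfiguration (names and thresholds are
# hypothetical, not taken from DySCAS): rules map the observed vehicle context
# to reconfiguration actions; safety-critical control loops stay statically
# configured and are never touched by this decision step.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over the current context
    action: str                         # reconfiguration action to request

RULES = [
    Rule("shed-comfort-load", lambda c: c["cpu_load"] > 0.85, "disable_comfort_services"),
    Rule("docked-upgrade",    lambda c: c["vehicle_state"] == "parked" and c["link"] == "wifi",
         "install_pending_updates"),
    Rule("low-power",         lambda c: c["battery"] < 0.2, "reduce_sampling_rates"),
]

def decide(context: dict) -> list:
    """Return the reconfiguration actions whose context rules currently hold."""
    return [r.action for r in RULES if r.condition(context)]

print(decide({"cpu_load": 0.9, "vehicle_state": "driving", "link": "none", "battery": 0.6}))
# -> ['disable_comfort_services']
```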

    The Market-based Lead Phasedown

    The U.S. lead phasedown was effective in meeting its environmental objectives, and did so more quickly with the allowance of permit banking. The marketable lead permit system was highly cost-effective, saving hundreds of millions of dollars relative to comparable policies that did not allow trading or banking. Estimates suggest that transaction costs brought about only a modest reduction in program efficiency. The market-based nature of the program also provided incentives for more efficient adoption of new lead-removing technology relative to a uniform standard. Distributionally, the program was likely more responsive to the cost concerns of small refiners than a similar uniform standard would have been. The flexibility of the program likely increased the number of violations, however, and added an unexpected monitoring and enforcement burden. On the other hand, one of the efficiency advantages of the incentive-based program is that it provided opportunities for unanticipated means of cost-effective compliance.
    Keywords: lead phasedown, gasoline, tradable permit, market-based policy, technology adoption

    Flexible Scheduling in Middleware for Distributed Rate-Based Real-Time Applications - Doctoral Dissertation, May 2002

    Distributed rate-based real-time systems, such as process control and avionics mission computing systems, have traditionally been scheduled statically. Static scheduling provides assurance of schedulability prior to run-time and incurs low run-time overhead. However, static scheduling is brittle in the face of unanticipated overload, and treats invocation-to-invocation variations in resource requirements inflexibly. As a consequence, processing resources are often under-utilized in the average case, and the resulting systems are hard to adapt to meet new real-time processing requirements. Dynamic scheduling offers relief from the limitations of static scheduling. However, dynamic scheduling often has a high run-time cost because certain decisions are enforced on-line. Furthermore, under conditions of overload, tasks can be scheduled dynamically that may never be dispatched, or that upon dispatch would miss their deadlines. We review the implications of these factors for rate-based distributed systems, and posit the necessity of combining static and dynamic approaches to exploit the strengths, and compensate for the weaknesses, of either approach in isolation. We present a general hybrid approach to real-time scheduling and dispatching in middleware that can employ both static and dynamic components. This approach provides (1) feasibility assurance for the most critical tasks, (2) the ability to extend this assurance incrementally to operations in successively lower criticality equivalence classes, (3) the ability to trade off bounds on feasible utilization and dispatching overhead in cases where, for example, execution jitter is a factor or rates are not harmonically related, and (4) overall flexibility to make better use of scarce computing resources and to enforce a wider range of application-specified execution requirements. This approach also meets additional constraints of an increasingly important class of rate-based systems: those with requirements for robust management of real-time performance in the face of rapidly and widely changing operating conditions. To support these requirements, we present a middleware framework that implements the hybrid scheduling and dispatching approach described above, and that also provides support for (1) adaptive rescheduling of operations at run time and (2) reflective alternation among several scheduling strategies to improve real-time performance in the face of changing operating conditions. Adaptive rescheduling must be performed whenever operating conditions exceed the ability of the scheduling and dispatching infrastructure to meet the critical real-time requirements of the system under the currently specified rates and execution times of operations. Adaptive rescheduling relies on the ability to change the rates of execution of at least some operations, and may occur under the control of a higher-level middleware resource manager. Different rates of execution may be specified under different operating conditions, and the number of such possible combinations may be arbitrarily large. Furthermore, adaptive rescheduling may in turn require notification of rate-sensitive application components. It is therefore desirable to handle variations in operating conditions entirely within the scheduling and dispatching infrastructure when possible.
    A rate-based distributed real-time application, or a higher-level resource manager, could thus fall back on adaptive rescheduling only when it cannot achieve acceptable real-time performance through self-adaptation. Reflective alternation among scheduling heuristics offers a way to tune real-time performance internally, and we offer foundational support for this approach. In particular, run-time observable information, such as that provided by our metrics-feedback framework, makes it possible to detect when the current scheduling heuristic is underperforming the level of service another could provide; this forms the basis for guided adaptation. Furthermore, we present empirical results for our framework in a realistic avionics mission computing environment. This dissertation makes five contributions in support of flexible and adaptive scheduling and dispatching in middleware. First, we provide a middleware scheduling framework that supports arbitrary and fine-grained composition of static and dynamic scheduling, to assure critical timeliness constraints while improving noncritical performance under a range of conditions. Second, we provide a flexible dispatching infrastructure framework composed of fine-grained primitives, and describe how appropriate configurations can be generated automatically based on the output of the scheduling framework. Third, we describe algorithms to reduce the overhead and duration of adaptive rescheduling, based on sorting for rate selection and priority assignment. Fourth, we provide timely and efficient performance information through an optimized metrics-feedback framework, to support higher-level reflection and adaptation decisions. Fifth, we present the results of empirical studies to quantify and evaluate the performance of alternative canonical scheduling heuristics across a range of load and load-jitter conditions. These studies were conducted within an avionics mission computing application framework running on realistic middleware and embedded hardware. The results obtained from these studies (1) demonstrate the potential benefits of reflective alternation among distinct scheduling heuristics at run time, and (2) suggest performance factors of interest for future work on adaptive control policies and mechanisms using this framework.
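
    The dissertation's framework is not reproduced here; the sketch below only illustrates the general idea of composing a static and a dynamic lane: critical operations receive statically assigned rate-monotonic priorities, while noncritical operations are ordered at run time by an approximate earliest-deadline rule. All operation names and periods are hypothetical.

```python
# Sketch of a hybrid static/dynamic dispatcher (hypothetical operations, not the
# dissertation's framework): critical operations keep statically assigned
# rate-monotonic priorities; noncritical operations are ordered dynamically by
# an approximate earliest deadline and only run when no critical job is pending.

class Op:
    def __init__(self, name, period_ms, critical):
        self.name, self.period, self.critical = name, period_ms, critical

def static_priorities(ops):
    """Rate-monotonic assignment: shorter period -> higher (numerically lower) priority."""
    crit = sorted((o for o in ops if o.critical), key=lambda o: o.period)
    return {o.name: i for i, o in enumerate(crit)}

def dispatch(ops, now_ms, released):
    """Choose the next operation among currently released jobs."""
    prio = static_priorities(ops)
    crit_ready = [o for o in released if o.critical]
    if crit_ready:                                   # the static lane always wins
        return min(crit_ready, key=lambda o: prio[o.name])
    noncrit = [o for o in released if not o.critical]
    if not noncrit:
        return None
    # Dynamic lane: earliest-deadline-first, with the deadline approximated as
    # the current release time plus the operation's period.
    return min(noncrit, key=lambda o: now_ms + o.period)

ops = [Op("flight_ctrl", 10, True), Op("sensor_fusion", 25, True),
       Op("telemetry", 100, False), Op("display", 50, False)]
print(dispatch(ops, now_ms=0, released=ops).name)               # -> flight_ctrl
print(dispatch(ops, now_ms=0, released=[ops[2], ops[3]]).name)  # -> display
```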

    Considerations in development of expert systems for real-time space applications

    Over the years, demand on space systems has increased tremendously, and this trend will continue for the near future. The enhanced capabilities expected of space systems, however, can only be met with increased complexity and sophistication of onboard and ground systems. Artificial intelligence and expert system techniques have great potential in space applications. Expert systems could facilitate autonomous decision making, improve in-orbit fault diagnosis and repair, enhance performance, and reduce reliance on ground support. However, real-time expert systems, unlike conventional off-line consultative systems, have to satisfy certain stringent requirements before they can be used for onboard space applications. Developing expert system applications for space also presents challenging and interesting new environments. This paper discusses the special characteristics, requirements, and typical life-cycle issues for onboard expert systems. It also describes considerations in design, development, and implementation that are particularly important to real-time expert systems for space applications.
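
    As a toy illustration of one such requirement, bounded inference time, the sketch below runs simple forward-chaining rules under an explicit time budget and returns whatever conclusions have been derived when the budget expires. The rules and facts are hypothetical and not from the paper.

```python
# Toy forward-chaining rule evaluation under a deadline (hypothetical rules and
# facts): a real-time expert system must return its best answer before the time
# budget expires rather than searching exhaustively like an off-line system.
import time

RULES = [
    ({"bus_voltage_low", "battery_temp_high"}, "isolate_battery_string"),
    ({"wheel_rpm_anomaly"}, "switch_to_backup_wheel"),
    ({"isolate_battery_string", "load_shed_possible"}, "shed_noncritical_loads"),
]

def infer(facts: set, budget_s: float = 0.005) -> set:
    """Fire rules until no new facts are derived or the time budget is spent."""
    deadline = time.monotonic() + budget_s
    derived = set(facts)
    changed = True
    while changed and time.monotonic() < deadline:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts

print(infer({"bus_voltage_low", "battery_temp_high", "load_shed_possible"}))
# -> {'isolate_battery_string', 'shed_noncritical_loads'} (if the budget allows)
```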