Contention-resolving model predictive control for coupled control systems with shared resources
Priority-based scheduling strategies are often used to resolve contention in resource-constrained control systems. Such strategies inevitably introduce time delays into the control loops, which may degrade performance or even destabilize the systems. Considering the coupling between priority assignment and control, this thesis presents a novel method to co-design priority assignments and control laws for each control system, aiming to minimize the overall performance degradation caused by contention. The co-design problem is formulated as a mixed-integer optimization problem with a very large search space, which makes computing the optimal solution difficult. To solve the problem, we develop a contention-resolving model predictive control method that dynamically assigns priorities and computes an optimal control. The priority assignments can be generated using a sample-based approach without excessive demand on computing resources, and all possible priority combinations can be represented by a decision tree. We present necessary and sufficient conditions to test the schedulability of the generated priority assignments while constructing the decision tree, which guarantees that the priority assignments in the tree always lead to feasible solutions. The optimal controls can then be computed iteratively following the order of the generated feasible priorities, and the optimal priority assignment and control design are determined by searching for the lowest-cost path in the decision tree. Under the fundamental assumptions of real-time scheduling, the solution computed by the contention-resolving model predictive control is proved to be globally optimal. The effectiveness of the presented method is verified through simulation in three real-world applications: networked control systems, traffic intersection management systems, and human-robot collaboration systems.
The performance of our method is compared with the most widely used scheduling methods in these applications, and the results demonstrate significant improvements.
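The decision-tree search over priority assignments can be illustrated with a toy sketch. Everything here (the task model, the weighted-delay cost, and the single-resource schedulability test) is a simplification invented for illustration, not the thesis's actual formulation:

```python
import itertools

def schedulable(order, tasks):
    """Check schedulability: each task, served in the given priority
    order on one shared resource, must finish before its deadline."""
    t = 0.0
    for i in order:
        t += tasks[i]["exec"]          # resource stays busy until t
        if t > tasks[i]["deadline"]:
            return False
    return True

def delay_cost(order, tasks):
    """Cost of an assignment: weighted completion delay per task."""
    t, cost = 0.0, 0.0
    for i in order:
        t += tasks[i]["exec"]
        cost += tasks[i]["weight"] * t  # later service -> larger penalty
    return cost

def best_priority_assignment(tasks):
    """Decision-tree search: enumerate priority orders, prune the
    infeasible branches, and return the lowest-cost feasible order."""
    best, best_cost = None, float("inf")
    for order in itertools.permutations(range(len(tasks))):
        if not schedulable(order, tasks):
            continue                    # prune infeasible branch
        c = delay_cost(order, tasks)
        if c < best_cost:
            best, best_cost = order, c
    return best, best_cost
```

Exhaustive enumeration is exponential in the number of tasks; the thesis's contribution is precisely to avoid this blow-up via sample-based generation and schedulability pruning while constructing the tree.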
Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning
The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques.
Adaptive Quality of Service Control in Distributed Real-Time Embedded Systems
An increasing number of distributed real-time embedded systems face the critical challenge of providing Quality of Service (QoS) guarantees in open and unpredictable environments. For example, such systems often need to enforce CPU utilization bounds on multiple processors in order to avoid overload and meet end-to-end deadlines, even when task execution times deviate significantly from their estimated values or change dynamically at run-time. This dissertation presents an adaptive QoS control framework which includes a set of control design methodologies to provide robust QoS assurance for systems at different scales. To demonstrate its effectiveness, we have applied the framework to the end-to-end CPU utilization control problem for a common class of distributed real-time embedded systems with end-to-end tasks. We formulate the utilization control problem as a constrained multi-input, multi-output control model. We then present a centralized control algorithm for small- and medium-sized systems, and a decentralized control algorithm for large-scale systems. Both algorithms are designed systematically based on model predictive control theory to dynamically enforce desired utilizations. We also introduce novel task allocation algorithms to ensure that the system is controllable and feasible for utilization control. Furthermore, we integrate our control algorithms with fault-tolerance mechanisms as an effective way to develop robust middleware systems, which maintain both system reliability and real-time performance even when the system is in the face of malicious external resource contention and permanent processor failures. Both control analysis and extensive experiments demonstrate that our control algorithms and middleware systems can achieve robust utilization guarantees. The control framework has also been successfully applied to other distributed real-time applications, such as end-to-end delay control in real-time image transmission.
Our results show that adaptive QoS control middleware is a step towards self-managing, self-healing, and self-tuning distributed computing platforms.
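The feedback idea behind utilization control can be sketched in a few lines. This is a hypothetical scalar stand-in for the constrained MIMO model-predictive controller in the dissertation: the task invocation rate is the actuator, measured CPU utilization is the output, and the controller drives utilization toward a set point despite unknown execution times:

```python
def utilization_controller(u_measured, u_set, rate,
                           gain=0.5, rate_min=0.1, rate_max=10.0):
    """One control step: adjust the task invocation rate in proportion
    to the utilization error, clamped to actuator limits."""
    error = u_set - u_measured
    new_rate = rate * (1.0 + gain * error)
    return min(rate_max, max(rate_min, new_rate))

# Toy plant: utilization is (unknown to the controller) rate * exec_time.
exec_time = 0.1
rate = 10.0
for _ in range(30):
    u = rate * exec_time               # measure current utilization
    rate = utilization_controller(u, u_set=0.7, rate=rate)
```

After a few iterations the rate settles near 7.0 invocations per unit time, so utilization converges to the 0.7 set point without the controller ever knowing `exec_time`; the real framework does this jointly across processors under end-to-end task constraints.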
Intelligent Medium Access Control Protocols for Wireless Sensor Networks
The main contribution of this thesis is to present the design and evaluation of intelligent MAC protocols for Wireless Sensor Networks (WSNs). The objective of this research is to improve the channel utilisation of WSNs while providing flexibility and simplicity in channel access. As WSNs become an efficient tool for recognising and collecting various types of information from the physical world, sensor nodes are expected to be deployed in diverse geographical environments including volcanoes, jungles, and even rivers. Consequently, the requirements for flexibility of deployment, simplicity of maintenance, and system self-organisation are raised to a higher level. A recently developed reinforcement learning-based MAC scheme, referred to as ALOHA-Q, is adopted as the baseline MAC scheme in this thesis due to its intelligent collision avoidance, on-demand transmission strategy, and relatively simple operation. Previous studies have shown that the reinforcement learning technique can considerably improve system throughput and significantly reduce the probability of packet collisions. However, the implementation of reinforcement learning relies on assumptions about a number of critical network parameters, which impedes the usability of ALOHA-Q. To overcome the challenges in realistic scenarios, this thesis proposes several novel schemes and techniques. Two types of frame size evaluation schemes are designed to deal with the uncertainty of node population in single-hop systems, and the unpredictability of radio interference and node distribution in multi-hop systems. A slot-swapping technique is developed to solve the hidden node issue of multi-hop networks. Moreover, an intelligent frame adaptation scheme is introduced to help sensor nodes achieve collision-free scheduling in cross-chain networks.
The combination of these individual contributions forms a set of state-of-the-art MAC protocols, offering a simple, intelligent, and distributed solution that improves the channel utilisation and extends the lifetime of WSNs.
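The core ALOHA-Q mechanism can be sketched as a slotted-frame Q-learner. The class below is a minimal illustration under invented parameters (one Q-value per slot, reward +1 for a successful transmission and -1 for a collision, greedy slot choice); it is not the thesis's exact protocol, but it shows how nodes settle into collision-free schedules without coordination:

```python
import random

class AlohaQNode:
    """Minimal ALOHA-Q style learner: one Q-value per slot in the
    frame; pick the best slot, reinforce success, punish collision."""
    def __init__(self, frame_size, alpha=0.1):
        self.q = [0.0] * frame_size
        self.alpha = alpha

    def choose_slot(self):
        best = max(self.q)
        # break ties randomly so identical nodes eventually spread out
        return random.choice([s for s, v in enumerate(self.q) if v == best])

    def update(self, slot, success):
        reward = 1.0 if success else -1.0
        self.q[slot] += self.alpha * (reward - self.q[slot])

def run_frames(nodes, frames=500):
    """Simulate contention: each node transmits once per frame; a slot
    chosen by more than one node is a collision for all of them."""
    for _ in range(frames):
        picks = [n.choose_slot() for n in nodes]
        for node, slot in zip(nodes, picks):
            node.update(slot, picks.count(slot) == 1)
    return picks  # slot choices in the final frame
```

With as many slots as nodes, a successful node's Q-value locks it to its slot while colliding nodes are pushed toward unused slots, so the final frame is typically collision-free. The thesis's schemes address what this sketch assumes away: the frame size (i.e. the node population) and interference are not known in advance.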
Design and performance optimization of asynchronous networks-on-chip
As digital systems continue to grow in complexity, the design of conventional synchronous systems is facing unprecedented challenges. The number of transistors on individual chips is already in the multi-billion range, and a greatly increasing number of components are being integrated onto a single chip. As a consequence, modern digital designs are under strong time-to-market pressure, and there is a critical need for composable design approaches for large complex systems.
In the past two decades, networks-on-chip (NoCs) have been a highly active research area. In a NoC-based system, functional blocks are first designed individually and may run at different clock rates. These modules are then connected through a structured network for on-chip global communication. However, due to the rigidity of centrally-clocked NoCs, system scalability, energy, and performance have become bottlenecks that cannot easily be resolved with synchronous approaches. As a result, there has been significant recent interest in combining the notion of asynchrony with NoC designs. Since the NoC approach inherently separates the communication infrastructure, and its timing, from computational elements, it is a natural match for an asynchronous paradigm. Asynchronous NoCs therefore enable a modular and extensible system composition for an 'object-oriented' design style.
The thesis aims to significantly advance the state of the art and viability of asynchronous and globally-asynchronous locally-synchronous (GALS) networks-on-chip, to enable high-performance and low-energy systems. The proposed asynchronous NoCs are almost entirely based on standard cells, which eases their integration into industrial design flows. The contributions are instantiated in three different directions.
First, practical acceleration techniques are proposed for optimizing the system latency, in order to break through the latency bottleneck in the memory interfaces of many on-chip parallel processors. Novel asynchronous network protocols are proposed, along with concrete NoC designs. A new concept, called 'monitoring network', is introduced. Monitoring networks are lightweight shadow networks used for fast-forwarding anticipated traffic information, ahead of the actual packet traffic. The routers are therefore allowed to initiate and perform arbitration and channel allocation in advance. The technique is successfully applied to two topologies which belong to two different categories: a variant mesh-of-trees (MoT) structure and a 2D-mesh topology. Considerable and stable latency improvements are observed across a wide range of traffic patterns, along with moderate throughput gains.
Second, for the first time, a high-performance and low-power asynchronous NoC router is compared directly to a leading commercial synchronous counterpart in an advanced industrial technology. The asynchronous router design shows significant performance improvements, as well as area and power savings. The proposed asynchronous router integrates several advanced techniques, including a low-latency circular FIFO for buffer design, and a novel end-to-end credit-based virtual channel (VC) flow control. In addition, a semi-automated design flow is created, which uses portions of a standard synchronous tool flow.
Finally, a high-performance multi-resource asynchronous arbiter design is developed. This small but important component can be directly used in existing asynchronous NoCs for performance optimization. In addition, this standalone design promises to open up new NoC directions, as well as to find general use in parallel systems. In the proposed arbiter design, the allocation of a resource to a client is divided into several steps. Multiple successive client-resource pairs can be selected rapidly in pipelined sequence, and the completion of the assignments can overlap in parallel.
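The multi-resource allocation logic can be modelled functionally in software. The sketch below is a hypothetical toy model of the pairing step only, not the asynchronous circuit itself: clients are matched with free resources one pair at a time, which is the step sequence that the hardware arbiter pipelines and overlaps:

```python
from collections import deque

def arbitrate(requests, num_resources):
    """Functional model of multi-resource arbitration: repeatedly
    (1) select the next waiting client and (2) bind it to the next
    free resource, until clients or resources run out."""
    free = deque(range(num_resources))   # pool of free resource IDs
    grants = {}
    for client in requests:              # step 1: pick a waiting client
        if not free:
            break                        # all resources busy
        grants[client] = free.popleft()  # step 2: bind a free resource
    return grants
```

In the actual design the successive client-resource pairings proceed concurrently in a pipeline rather than in this sequential loop; the model only captures which grants are produced.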
In sum, the thesis provides a set of advanced design solutions for performance optimization of asynchronous and GALS networks-on-chip. These solutions span several levels, from network protocols down to router- and component-level optimizations, and can be directly applied to existing basic asynchronous NoC designs to provide a significant leap in performance.
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.
Progressive introduction of network softwarization in operational telecom networks: advances at architectural, service and transport levels
Technological paradigms such as Software Defined Networking, Network Function Virtualization, and Network Slicing are together offering new ways of providing services. This process is widely known as Network Softwarization, whereby traditional operational networks adopt capabilities and mechanisms inherited from the computing world, such as programmability, virtualization, and multi-tenancy.
This adoption brings a number of challenges, from both the technological and the operational perspectives. On the other hand, these paradigms provide unprecedented flexibility, opening opportunities to develop new services and new ways of exploiting and consuming telecom networks.
This thesis first overviews the implications of the progressive introduction of network softwarization in operational networks, and then details advances at different levels, namely the architectural, service, and transport levels. This is done through specific exemplary use cases and evolution scenarios, with the goal of illustrating both new possibilities and existing gaps in the ongoing transition towards an advanced future mode of operation.
The analysis is performed from the perspective of a telecom operator, paying special attention to how to integrate all these paradigms into operational networks to assist their evolution towards new, more sophisticated service demands.
Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President: Eduardo Juan Jacob Taquet; Secretary: Francisco Valera Pintor; Member: Jorge López Vizcaín.
Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994
The Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space (i-SAIRAS 94), held October 18-20, 1994, in Pasadena, California, was jointly sponsored by NASA, ESA, and Japan's National Space Development Agency, and was hosted by the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. i-SAIRAS 94 featured presentations covering a variety of technical and programmatic topics, ranging from underlying basic technology to specific applications of artificial intelligence and robotics to space missions. i-SAIRAS 94 featured a special workshop on planning and scheduling and provided scientists, engineers, and managers with the opportunity to exchange theoretical ideas, practical results, and program plans in such areas as space mission control, space vehicle processing, data analysis, autonomous spacecraft, space robots and rovers, satellite servicing, and intelligent instruments
Cost functions in optical burst-switched networks
Optical Burst Switching (OBS) is a new paradigm for an all-optical Internet. It combines the best features of Optical Circuit Switching (OCS) and Optical Packet Switching (OPS) while avoiding the main problems associated with those networks. Namely, it offers good granularity, but its hardware requirements are lower than those of OPS.
In a backbone network, a low loss ratio is of particular importance. Also, to meet varying user requirements, the network should support multiple classes of service. In Optical Burst-Switched networks, both of these goals are closely related to the way bursts are arranged in channels. Unlike the case of circuit switching, scheduling decisions affect the loss probability of future bursts.
This thesis proposes the idea of a cost function. The cost function is used to judge the quality of a burst arrangement and to estimate the probability that a burst will interfere with future bursts. Two applications of the cost function are proposed. A scheduling algorithm uses the value of the cost function to optimize the alignment of the new burst with other bursts in a channel, thus minimising the loss ratio. A cost-based burst dropping algorithm, which can be used as part of a Quality of Service scheme, drops only those bursts whose cost function value indicates that they are most likely to cause contention. Simulation results, obtained using a custom-made OBS extension to the ns-2 simulator, show that the cost-based algorithms improve network performance.
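The cost-based scheduling idea can be sketched as follows. The particular cost function below is invented for illustration (it penalises small residual gaps next to the new burst, since tight gaps are hard to fill and invite future contention); the thesis's actual cost functions differ:

```python
def gap_cost(channel_bursts, start, end):
    """Hypothetical cost of placing burst (start, end) on a channel:
    smaller leftover gaps next to existing bursts cost more."""
    cost = 0.0
    for s, e in channel_bursts:
        if e <= start:
            cost += 1.0 / (1.0 + start - e)   # gap before the new burst
        elif s >= end:
            cost += 1.0 / (1.0 + s - end)     # gap after the new burst
    return cost

def schedule_burst(channels, start, end):
    """Place the burst on the feasible channel with the lowest-cost
    arrangement; return the channel index, or None (drop) if every
    channel overlaps the new burst."""
    best, best_cost = None, float("inf")
    for idx, bursts in enumerate(channels):
        if any(s < end and start < e for s, e in bursts):
            continue                          # overlaps an existing burst
        c = gap_cost(bursts, start, end)
        if c < best_cost:
            best, best_cost = idx, c
    if best is not None:
        channels[best].append((start, end))
    return best
```

The dropping algorithm described above uses the same cost value in reverse: among bursts contending for a channel, it discards the one whose placement cost marks it as most likely to collide with future arrivals.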