110,598 research outputs found

    Adaptive Resource Management in Asynchronous Real-time Distributed Systems Using Feedback Control Functions

    Presents feedback control techniques for performing adaptive resource management in asynchronous real-time distributed systems. Such systems are characterized by significant execution-time uncertainties in the application environment and in the system resource state. They therefore require adaptive resource management that dynamically monitors the system for adherence to the desired real-time requirements and performs run-time adaptation of the application to changing workloads when unacceptable timeliness behavior is observed. We propose adaptive resource management techniques based on feedback control theory. The controllers solve resource allocation problems that arise during run-time adaptation using the classical proportional-integral-derivative (PID) control functions. We study the performance of the controllers through simulation. The simulation results indicate that the controllers produce low missed-deadline ratios and resource utilizations under high workloads.
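    As a concrete illustration of the approach described above, the sketch below shows a discrete PID controller that turns an observed missed-deadline ratio into a resource-allocation adjustment. It is a minimal sketch, not the paper's implementation; the class name, gains, and the interpretation of the output as an extra resource share are assumptions.

```python
# Minimal sketch (not the paper's implementation): a discrete PID controller
# that adjusts a resource allocation from the observed missed-deadline ratio.
class PIDAllocator:
    def __init__(self, kp, ki, kd, setpoint=0.0, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # desired missed-deadline ratio
        self.dt = dt                      # sampling period in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_ratio):
        # error > 0 means too many deadlines are being missed
        error = measured_ratio - self.setpoint
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # positive output -> grant more resources to the application
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: react to a sampled missed-deadline ratio of 12%
controller = PIDAllocator(kp=0.8, ki=0.2, kd=0.05)
extra_share = controller.update(measured_ratio=0.12)
```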

    Intelligent Feedback Control-based Adaptive Resource Management for Asynchronous, Decentralized Real-time Systems

    Presents intelligent feedback control techniques for adaptive resource management in asynchronous, decentralized real-time systems. We propose adaptive resource management techniques that are based on feedback control theory and are designed using the intelligent control design paradigm. The controllers solve resource allocation problems that arise during run-time adaptation using the classical proportional-integral-derivative (PID) control functions and fuzzy logic. We study the performance of the controllers through simulation. The simulation results indicate that the controllers produce low missed-deadline ratios and resource utilizations in high-workload situations.
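    A rough sketch of how fuzzy logic might supervise PID gains in this setting is given below. The membership functions, thresholds, and gain-scaling rules are illustrative assumptions, not the controllers proposed in the paper.

```python
# Minimal sketch (an assumption, not the paper's design): a fuzzy-style
# supervisor that scales PID gains from the magnitude of the deadline-miss error.
def membership(error, low=0.05, high=0.20):
    """Return degrees of membership in the 'small' and 'large' error sets."""
    e = abs(error)
    if e <= low:
        return 1.0, 0.0
    if e >= high:
        return 0.0, 1.0
    frac = (e - low) / (high - low)
    return 1.0 - frac, frac

def fuzzy_pid_gains(error, base_kp=0.8, base_ki=0.2):
    small, large = membership(error)
    # Rule base: small error -> keep nominal gains; large error -> act more aggressively.
    kp = small * base_kp + large * (2.0 * base_kp)
    ki = small * base_ki + large * (1.5 * base_ki)
    return kp, ki

kp, ki = fuzzy_pid_gains(error=0.12)   # e.g., 12% missed-deadline error
```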

    CPU Resource Management and Noise Filtering for PID Control

    The first part of the thesis deals with adaptive CPU resource management for multicore platforms. The work was done as part of the resource manager component of the adaptive resource management framework implemented in the European ACTORS project. The framework dynamically allocates CPU resources to the applications. The key element of the framework is the resource manager, which combines feedforward and feedback algorithms together with reservation techniques. The resource requirements of the applications are provided through service level tables. Dynamic bandwidth allocation is performed by the resource manager, which adapts applications to changes in resource availability and adapts the resource allocation to changes in application requirements. Dynamic bandwidth allocation also makes it possible to obtain real application models through tuning and updating of the initial service level tables.

    The second part of the thesis deals with the design of measurement noise filters for PID control. The design is based on an iterative approach to calculating the filter time constant, which requires process information in the form of an FOTD model. Tuning methods such as Lambda, SIMC, and AMIGO are used to obtain the controller parameters. New criteria based on the trade-offs between performance, robustness, and attenuation of measurement noise are proposed for assessing the design. Simple rules for calculating the filter time constant based on the nominal process model and the nominal controller are then derived, thus eliminating the need for iteration. Finally, a complete tuning procedure is proposed. The tuning procedure accounts for the effects of filtering on the nominal process: the added dynamics are included in the filtered process model, which is then used to recalculate the controller tuning parameters.
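    To make the second part concrete, the sketch below tunes a PI controller for an FOTD process using the standard SIMC rules and then picks a first-order measurement-filter time constant as a fixed fraction of the closed-loop time constant. The SIMC formulas are standard, but the filter rule and the fraction alpha are placeholders, not the rules derived in the thesis.

```python
# Minimal sketch under assumptions (not the thesis's actual rules).
def simc_pi(K, T, L, Tc=None):
    """SIMC PI tuning for the FOTD model G(s) = K * exp(-L*s) / (T*s + 1)."""
    Tc = L if Tc is None else Tc          # common choice: closed-loop time constant equal to the delay
    kp = T / (K * (Tc + L))
    Ti = min(T, 4.0 * (Tc + L))
    return kp, Ti

def filter_time_constant(L, Tc=None, alpha=0.1):
    """Heuristic: T_f as a fraction of the closed-loop time constant (assumed rule)."""
    Tc = L if Tc is None else Tc
    return alpha * (Tc + L)

kp, Ti = simc_pi(K=1.0, T=10.0, L=2.0)
Tf = filter_time_constant(L=2.0)
```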

    Learning algorithms for the control of routing in integrated service communication networks

    There is a high degree of uncertainty regarding the nature of traffic on future integrated service networks. This uncertainty motivates the use of adaptive resource allocation policies that can take advantage of statistical fluctuations in traffic demands. The adaptive control mechanisms must be 'lightweight' in terms of their overheads and must scale to potentially large networks with many traffic flows. Adaptive routing is one form of adaptive resource allocation, and this thesis considers the application of Stochastic Learning Automata (SLA) for distributed, lightweight adaptive routing in future integrated service communication networks. The thesis begins with a broad critical review of Artificial Intelligence (AI) techniques applied to the control of communication networks. Detailed simulation models of integrated service networks are then constructed, and learning automata based routing is compared with traditional techniques on large-scale networks. Learning automata are examined for the 'Quality-of-Service' (QoS) routing problem in realistic network topologies, where flows may be routed in the network subject to multiple QoS metrics, such as bandwidth and delay. It is found that learning automata based routing gives considerable blocking-probability improvements over shortest-path routing, despite using only local connectivity information and a simple probabilistic updating strategy. Furthermore, automata are considered for routing in more complex environments, spanning issues such as multi-rate traffic, trunk reservation, routing over multiple domains, routing in high bandwidth-delay product networks, and the use of learning automata as a background learning process. Automata are also examined for routing of both 'real-time' and 'non-real-time' traffic in an integrated traffic environment, where the non-real-time traffic has access to the bandwidth 'left over' by the real-time traffic. It is found that adopting learning automata for the routing of the real-time traffic may improve performance for both real-time and non-real-time traffic under certain conditions. In addition, it is found that one set of learning automata may route both traffic types satisfactorily. Automata are considered for the routing of multicast connections in receiver-oriented, dynamic environments, where receivers may join and leave multicast sessions dynamically. Automata are shown to be able to minimise the average delay or the total cost of the resulting trees using appropriate feedback from the environment. Automata provide a distributed solution to the dynamic multicast problem, requiring purely local connectivity information and a simple updating strategy. Finally, automata are considered for the routing of multicast connections that require QoS guarantees, again in receiver-oriented dynamic environments. It is found that the distributed application of learning automata leads to considerably lower blocking probabilities than a shortest-path tree approach, due to a combination of load balancing and minimum-cost behaviour.
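    The core SLA routing idea can be illustrated with a linear reward-inaction (L_R-I) automaton that keeps a probability distribution over a node's outgoing links and reinforces a link whenever a call routed over it is accepted. The sketch below is an illustration of that update, not the thesis's exact scheme; the link names and reward step are assumed.

```python
import random

# Minimal sketch: a linear reward-inaction (L_R-I) learning automaton for one node.
class RoutingAutomaton:
    def __init__(self, links, reward_step=0.1):
        self.links = links
        self.p = {l: 1.0 / len(links) for l in links}   # action probabilities
        self.a = reward_step

    def choose(self):
        # Sample an outgoing link according to the current probabilities.
        r, acc = random.random(), 0.0
        for link, prob in self.p.items():
            acc += prob
            if r <= acc:
                return link
        return self.links[-1]

    def reward(self, chosen):
        # L_R-I update: move probability mass toward the rewarded action;
        # on a penalty (blocked call) the probabilities are left unchanged.
        for link in self.links:
            if link == chosen:
                self.p[link] += self.a * (1.0 - self.p[link])
            else:
                self.p[link] *= (1.0 - self.a)


automaton = RoutingAutomaton(links=["link_A", "link_B", "link_C"])
link = automaton.choose()
accepted = True          # placeholder for "call was accepted on this route"
if accepted:
    automaton.reward(link)
```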

    Improving Performance of Feedback-Based Real-Time Networks using Model Checking and Reinforcement Learning

    Traditionally, automatic control techniques arose from the need for automation in mechanical systems. These techniques rely on robust mathematical modelling of physical systems with the goal of driving their behaviour to desired set-points. Decades of research have successfully automated, optimized, and ensured the safety of a wide variety of mechanical systems. Recent advances in digital technology have made computers pervasive in every facet of life. As such, there have been many recent attempts to incorporate control techniques into digital technology. This thesis investigates the intersection and co-application of control theory and computer science to evaluate and improve the performance of time-critical systems. The thesis applies two different research areas, namely model checking and reinforcement learning, to design and evaluate two distinct real-time networks in conjunction with control technologies. The first is a camera surveillance system with the goal of constrained resource allocation to self-adaptive cameras. The second is a dual-delay real-time communication network with the goal of safe packet routing with minimal delays.

    The camera surveillance system consists of self-adaptive cameras and a centralized manager, in which the cameras capture a stream of images and transmit them to the central manager over a shared, constrained communication channel. The event-based manager allocates fractions of the shared bandwidth to all cameras in the network. The thesis provides guarantees on the behaviour of the camera surveillance network through model checking. Disturbances that arise during image capture due to variations in capture scenes are modelled using probabilistic and non-deterministic Markov Decision Processes (MDPs). Different properties of the camera network, such as the number of frame drops and bandwidth reallocations, are evaluated through formal verification.

    The second part of the thesis explores packet routing for real-time networks constructed from nodes and directed edges. Each edge in the network has two different delays: a worst-case delay that captures high-load characteristics, and a typical delay that captures the current network load. Each node in the network takes safe routing decisions by considering the delays already encountered and the amount of time remaining. The thesis applies reinforcement learning to route packets through the network with minimal delays while ensuring that the total path delay from source to destination does not exceed the pre-determined deadline of the packet. The reinforcement learning algorithm explores new edges to find optimal routing paths while ensuring safety through a simple pre-processing algorithm. The thesis shows that it is possible to apply powerful reinforcement learning techniques to time-critical systems with expert knowledge about the system.
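    A minimal sketch of the safety idea in the routing part is shown below: a pre-processing pass computes each node's worst-case delay to the destination over the worst-case edge delays, and a packet at a node is only offered next hops whose worst-case cost still fits within its remaining deadline slack; a learner (for example Q-learning) would then choose among these safe actions using the typical delays. The function names, graph representation, and the Dijkstra-based pre-processing are assumptions about how this could be realized, not the thesis's exact algorithm.

```python
import heapq
from collections import defaultdict

def worst_case_to_dest(edges, dest):
    """Dijkstra over worst-case edge delays, run backwards from the destination.
    edges: dict mapping node -> list of (next_node, typical_delay, worst_delay)."""
    rev = defaultdict(list)
    for u, outs in edges.items():
        for v, _, wc in outs:
            rev[v].append((u, wc))
    dist = defaultdict(lambda: float("inf"))
    dist[dest] = 0.0
    heap = [(0.0, dest)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for u, wc in rev[v]:
            if d + wc < dist[u]:
                dist[u] = d + wc
                heapq.heappush(heap, (dist[u], u))
    return dist

def safe_actions(edges, node, slack, wc_to_dest):
    """Next hops whose worst-case delay still leaves enough slack to meet the deadline."""
    return [(v, typ, wc) for v, typ, wc in edges[node]
            if wc + wc_to_dest[v] <= slack]
```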