
    Co-Design of Arbitrated Network Control Systems with Overrun Strategies

    This paper addresses the co-design of platform and control for multiple control applications in a network control system. Limited and shared resources among control and non-control applications introduce delays in transmitted messages. These delays in turn can degrade system performance and cause instability. In this paper, we propose an overrun framework together with a co-design approach to achieve both optimal control performance and efficient resource utilization. The starting point for this framework is an Arbitrated Network Control System (ANCS) approach, where flexibility and transparency in the network are utilized to arbitrate control messages. Using a two-parameter delay model that classifies the delays experienced by control messages as nominal, medium, or large, we propose a controller that switches between nominal, skip, and abort strategies. An automata-theoretic technique is introduced to derive analytical bounds on the abort and skip rates. A co-design algorithm is proposed to optimize the selection of the overrun parameters. A case study is presented that demonstrates the ANCS approach, the overrun framework, and the overall co-design algorithm.
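    As a rough illustration of the two-parameter delay model above, the sketch below (Python, illustrative only) classifies each control message's delay against two thresholds and maps the class to an overrun strategy. The threshold names tau1 and tau2 and the comments on what "skip" and "abort" do are assumptions; in the paper, selecting these overrun parameters is precisely the job of the co-design algorithm.

    # Hedged sketch of the two-parameter overrun model: thresholds tau1 < tau2
    # split message delays into nominal / medium / large, and the controller
    # switches strategy accordingly. Names and the exact semantics of "skip"
    # and "abort" are assumptions, not taken verbatim from the paper.

    def overrun_strategy(delay, tau1, tau2):
        if delay <= tau1:
            return "nominal"  # small delay: apply the control law as usual
        elif delay <= tau2:
            return "skip"     # medium delay: use the late result, skip the next sample
        else:
            return "abort"    # large delay: discard the late message entirely

    # Example: with tau1 = 2 ms and tau2 = 5 ms, a 3 ms delay triggers "skip".
    print(overrun_strategy(3e-3, tau1=2e-3, tau2=5e-3))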

    A Delay-Aware Cyber-Physical Architecture for Wide-Area Control of Power Systems

    In this paper we address the problem of wide-area control of power systems in the presence of different classes of network delays. We pose the control objective as an LQR minimization over the electro-mechanical states of the swing equations, and exploit flexibilities and transparencies of the communication network, such as scheduling policies and bandwidth, to co-design a delay-aware state feedback control law. Hence, unlike traditional robust control designs, our design is delay-aware, not delay-tolerant. A key feature of our method is to retain the samples of the control input until a desired time instant, using shapers, before releasing them for actuation, thereby regulating the delays entering the controller. In addition, our co-design includes an overrun management strategy to guarantee stability of the closed-loop power system model in case of occasional PMU data losses. This strategy allows dropping messages with very large delays, reducing resource utilization during busy network times and improving the overall performance of the system. We illustrate our results using a 50-bus, 14-generator, 4-area power system model, and show how the proposed arbitrated controller can guarantee significantly better closed-loop performance than traditional robust controllers.

    NSF Grant No. ECCS-113581
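    The shaper idea above, retaining control samples until a desired instant so that the actuation path sees a regulated delay rather than a random network delay, can be sketched as a small time-stamped buffer. Everything here (the class name, the fixed release offset D, the interface) is an assumption for illustration; the paper co-designs the release instant together with the control law.

    import heapq

    # Minimal shaper sketch: a sample time-stamped t_k at the sensor is held
    # until t_k + D before actuation, so the loop always experiences the same
    # designed-for delay D. D and this interface are illustrative assumptions.

    class Shaper:
        def __init__(self, D):
            self.D = D
            self._buf = []  # min-heap of (release_time, sample)

        def accept(self, t_stamp, sample):
            heapq.heappush(self._buf, (t_stamp + self.D, sample))

        def release(self, now):
            """Return every sample whose release instant has passed."""
            out = []
            while self._buf and self._buf[0][0] <= now:
                out.append(heapq.heappop(self._buf)[1])
            return out

    # A sample stamped at t = 0.10 s is released only at t >= 0.10 + 0.04 s,
    # even if it crossed the network faster than that.
    sh = Shaper(D=0.04)
    sh.accept(0.10, 1.7)
    print(sh.release(0.12))  # [] -- still held
    print(sh.release(0.14))  # [1.7]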

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform.

    In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols.

    Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyse and specify the timing requirements of a component in isolation from other components, we reserve and enforce the availability of all its specified resources at run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admission to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified; if it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this view: firstly, processor time is supplied discontinuously; secondly, the actual processor is faster; and thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specifications while accessing non-preemptive resources, even when access is arbitrated via well-defined real-time protocols.

    The scientific contributions of this work focus on these three issues, and our solutions cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and only experiences that the processor speed differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting task deadlines on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the component's deadlines, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers.

    Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling or of the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol used in an HSF; components can therefore be unaware that access to non-preemptive resources requires arbitration.

    Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms that confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without these confinement mechanisms, by means of experiments within an HSF-enabled real-time operating system.
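    Where the abstract speaks of reserving and enforcing processor supply for a component, a standard abstraction in the HSF literature is the periodic resource model Gamma(Pi, Theta): every Pi time units the component is guaranteed Theta units of processor time. The thesis does not necessarily use this exact model; the sketch below is a minimal illustration of how a component's worst-case supply can be bounded and checked against the EDF demand of its tasks.

    # Minimal supply/demand sketch for one component on a virtual platform,
    # assuming the classic periodic resource model Gamma(Pi, Theta)
    # (illustrative; not necessarily the model used in this thesis).

    def sbf(t, pi, theta):
        """Worst-case processor supply in any interval of length t."""
        blackout = 2 * (pi - theta)  # longest possible starvation gap
        if t <= blackout:
            return 0
        rem = t - blackout           # afterwards: theta units per period pi
        full = int(rem // pi)
        return full * theta + min(rem - full * pi, theta)

    def dbf(t, tasks):
        """EDF demand of implicit-deadline periodic tasks (wcet, period)."""
        return sum(int(t // p) * c for c, p in tasks)

    def schedulable(tasks, pi, theta, horizon):
        """Check dbf(t) <= sbf(t) at every task deadline up to the horizon."""
        deadlines = sorted({k * p for _, p in tasks
                            for k in range(1, int(horizon // p) + 1)})
        return all(dbf(t, tasks) <= sbf(t, pi, theta) for t in deadlines)

    # Example: two tasks on a half-capacity virtual platform.
    print(schedulable([(1, 8), (2, 12)], pi=4, theta=2, horizon=48))  # True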

    DMAC: Deadline-Miss-Aware Control

    The real-time implementation of periodic controllers requires solving a co-design problem, in which the choice of the controller sampling period is a crucial element. Classic design techniques limit the period exploration to safe values that guarantee the correct execution of the controller alongside the remaining real-time load, i.e., ensuring that the controller's worst-case response time does not exceed its deadline. This paper presents DMAC: the first formally grounded controller design strategy that explores shorter periods, thus explicitly taking into account the possibility of missing deadlines. The design leverages information about the probability that specific sub-sequences of deadline misses are experienced. The result is a fixed controller that on average performs like the ideal clairvoyant time-varying controller that knows future deadline hits and misses. We obtain a safe estimate of the hit and miss events using scenario theory, which allows us to provide probabilistic guarantees. The paper analyzes controllers implemented using the Logical Execution Time paradigm and three different strategies to handle deadline-miss events: killing the job, letting the job continue but skipping the next activation, and letting the job continue using a limited queue of jobs. Experimental results show that our design proposal, i.e., exploring the space where deadlines can be missed and handled with different strategies, greatly outperforms classical control design techniques.
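    The three deadline-miss strategies analysed in the paper (kill, skip-next, queue) can be made concrete with a toy uniprocessor simulation of a single periodic controller, sketched below. This is a simplified reading rather than the paper's Logical-Execution-Time analysis; in particular, the queue here is unbounded, whereas the paper uses a limited queue of jobs.

    from enum import Enum

    class Policy(Enum):
        KILL = "kill"        # abort the job at its deadline
        SKIP_NEXT = "skip"   # let the late job finish, skip the next activation
        QUEUE = "queue"      # let the late job finish, queue later activations

    def simulate(policy, costs, period):
        """Hit/miss trace of one periodic task (deadline = period) on a
        dedicated processor: True = hit, False = miss, None = skipped release.
        Simplified sketch; unlike the paper, the QUEUE backlog is unbounded."""
        trace, free_at, skip_next = [], 0.0, False
        for k, c in enumerate(costs):
            release, deadline = k * period, (k + 1) * period
            if policy is Policy.SKIP_NEXT and skip_next:
                trace.append(None)   # activation skipped after a miss
                skip_next = False
                continue
            finish = max(release, free_at) + c
            if finish <= deadline:
                trace.append(True)
                free_at = finish
            elif policy is Policy.KILL:
                trace.append(False)
                free_at = deadline   # overrunning job is killed at its deadline
            else:
                trace.append(False)
                free_at = finish     # job overruns; only SKIP_NEXT reads the flag
                skip_next = True
        return trace

    costs = [3, 9, 3, 3]             # the second job overruns a period of 5
    for p in Policy:
        print(p.value, simulate(p, costs, period=5))
    # kill  [True, False, True, True]
    # skip  [True, False, None, True]
    # queue [True, False, False, True]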

    Analysis of Embedded Controllers Subject to Computational Overruns

    Microcontrollers have become an integral part of modern everyday embedded systems, such as smart bikes, cars, and drones. Typically, microcontrollers operate under real-time constraints, which require the timely execution of programs on resource-constrained hardware. As embedded systems become increasingly complex, microcontrollers run the risk of violating their timing constraints, i.e., overrunning the program deadlines. Breaking these constraints can cause severe damage to both the embedded system and the humans interacting with the device. Therefore, it is crucial to analyse embedded systems properly to ensure that they do not pose any significant danger if the microcontroller overruns a few deadlines. However, there are very few tools available for assessing the safety and performance of embedded control systems when the microcontroller's implementation is taken into account. This thesis aims to fill this gap in the literature by presenting five papers on the analysis of embedded controllers subject to computational overruns. Details about the real-time operating system's implementation are included in the analysis, such as what happens to the controller's internal state representation when the timing constraints are violated. The contribution includes theoretical and computational tools for analysing the embedded system's stability, performance, and real-time properties. The embedded controller is analysed under three different types of timing violations: blackout events (when no control computation is completed during long periods), weakly-hard constraints (when the number of deadline overruns is constrained over a window), and stochastic overruns (when violations of timing constraints are governed by a probabilistic process). These scenarios are combined with different implementation policies to reduce the gap between the analysis and its practical applicability. The analyses are further validated with a comprehensive experimental campaign performed on both a set of physical processes and multiple simulations. In conclusion, the findings of this thesis reveal that the effect deadline overruns have on the embedded system depends heavily on the implementation details and the system's dynamics. Additionally, the stability analysis of embedded controllers subject to deadline overruns is typically conservative, implying that additional insights can be gained by also analysing the system's performance.
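    Of the three violation models mentioned, the weakly-hard one is the easiest to make concrete: an (m, K) constraint demands at most m deadline misses in any window of K consecutive jobs. The checker below implements this standard definition over a hit/miss trace; it is illustrative and not taken from the thesis.

    def satisfies_weakly_hard(trace, m, k):
        """True iff every window of k consecutive jobs in the trace
        (True = deadline hit, False = miss) contains at most m misses."""
        if len(trace) < k:
            return True  # no complete window to violate yet
        return all(sum(not hit for hit in trace[i:i + k]) <= m
                   for i in range(len(trace) - k + 1))

    trace = [True, False, True, True, False, True]
    print(satisfies_weakly_hard(trace, m=1, k=3))  # True: <= 1 miss per 3 jobs
    print(satisfies_weakly_hard(trace, m=0, k=3))  # False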

    Foundations of Infrastructure CPS

    Infrastructures have been around as long as urban centers, supporting a society's needs for planning, operation, and safety. As we move deeper into the 21st century, these infrastructures are becoming smart: they monitor themselves, communicate, and, most importantly, self-govern, which we denote as Infrastructure CPS. Cyber-physical systems are now becoming increasingly prevalent and possibly even mainstream. With the basics of CPS in place, such as stability, robustness, and reliability properties at a systems level, and hybrid, switched, and event-triggered properties at a network level, we believe that the time is right to go to the next step, Infrastructure CPS, which forms the focus of the proposed tutorial. We discuss three different foundations: (i) Human Empowerment, (ii) Transactive Control, and (iii) Resilience. This is followed by two examples, one on the nexus between the power and communication infrastructures, and the other on the nexus between natural gas and electricity, both of which have been investigated extensively of late and are emerging as apt illustrations of Infrastructure CPS.

    Beyond the Weakly Hard Model: Measuring the Performance Cost of Deadline Misses

    Most works in schedulability analysis are based on the assumption that constraints on the performance of the application can be expressed by a very limited set of timing constraints (often simply hard deadlines) on a task model. This model is insufficient to represent a large number of systems in which deadlines can be missed, or in which late task responses affect the performance but not the correctness of the application. For systems with possible temporary overloads, models like the m-K deadline have been proposed in the past. However, the m-K model has several limitations, since it does not consider the state of the system and is largely unaware of the way in which performance is affected by deadline misses (except for critical failures). In this paper, we present a state-based representation of the evolution of a system with respect to each deadline hit or miss event. Our representation is much more general (while hopefully still concise enough) and can represent the evolution in time of the performance of time-sensitive systems under possible temporary overloads. We provide the theoretical foundations for our model and also show an application to a simple system to give examples of the state representations and their use.
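    The state-based idea can be pictured as a small labelled transition system: each deadline hit or miss moves the system between performance states, and a per-state cost accumulates the performance degradation. The states, transitions, and costs below are invented for illustration; the paper develops the general theory behind such representations.

    # Hypothetical three-state model of performance evolution driven by
    # deadline hits (True) and misses (False). States, transitions and costs
    # are illustrative inventions, not the paper's actual model.

    TRANSITIONS = {
        ("nominal", True): "nominal",
        ("nominal", False): "degraded",
        ("degraded", True): "nominal",   # one hit is enough to recover
        ("degraded", False): "failed",
        ("failed", True): "degraded",
        ("failed", False): "failed",
    }
    COST = {"nominal": 0.0, "degraded": 1.0, "failed": 10.0}

    def performance_cost(trace, state="nominal"):
        """Accumulate per-state cost along a hit/miss trace."""
        total = 0.0
        for hit in trace:
            state = TRANSITIONS[(state, hit)]
            total += COST[state]
        return state, total

    print(performance_cost([True, False, False, True, True]))
    # ('nominal', 12.0): hit 0 + miss 1 + miss 10 + hit 1 + hit 0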

    Timing Predictability in Future Multi-Core Avionics Systems


    Review 3: Community engagement for health via coalitions, collaborations and partnerships (on-line social media and social networks) – a systematic review and meta-analysis

    BACKGROUND: This report describes the methods and findings of a systematic review on community engagement (CE) for health via online social media and social networks. It is the third and final review of a programme of work on the use and effectiveness of CE in interventions that target health outcomes. Social networks are one of many forms of CE. Our first two reviews suggested that the extent and particular processes of CE may be linked to effects on people's health. Online, electronic peer-to-peer social network sites (e.g. Facebook) and online social media tools (e.g. Twitter) have grown exponentially in recent years, and existing evidence on their effectiveness is ambiguous. AIMS: We aim to evaluate online social media/social network interventions with respect to: the extent of CE across design, delivery and evaluation; the types of health issues and populations that have been studied; their effectiveness in improving health and wellbeing and reducing health inequalities; and any particular features that account for heterogeneity in effect size estimates across studies. METHODS: Systematic review methods were applied to comprehensively locate and assess the available research evidence. The search strategy built on the searches used for Reviews 1 and 2 of this project (described elsewhere). The included studies were descriptively analysed and the findings were synthesised using three components: framework synthesis, meta-analysis and qualitative comparative analysis (QCA). RESULTS: A total of 11 studies were included in the review, none of which was set in the UK. The community was not explicitly involved in identifying the health need in any of the 11 studies. No study demonstrated a high level of CE in which participants were involved in all three measured elements: design, delivery and evaluation. Framework analysis indicated that peer delivery of the intervention was the predominant type of CE. Two processes of CE were reported (bidirectional communication and the use of facilitators), but none of the studies evaluated these processes. Professional facilitators were used more often in healthy eating/physical activity studies. Peer facilitators were used more often in youth-focused interventions, and professional facilitators were utilised more frequently in interventions targeting older populations. Studies focusing on women only tended to incorporate peer or professional facilitators to aid intervention delivery. Peer or professional facilitators were used slightly more consistently in interventions targeting minority ethnic groups. Meta-analyses and meta-regression showed no evidence of beneficial effects on any outcomes. There was moderate (I² between 25% and 50%) to high (I² ≥ 50%) heterogeneity between studies for primary outcomes, suggesting the existence of potential moderators. None of the tested study characteristics explained the variation in effect sizes. The QCA demonstrated that online social media/social networking interventions that included a facilitator showed higher effect sizes when they focused on topics other than healthy eating and physical activity. CONCLUSIONS: The results from this study suggest that CE is not utilised in the design or evaluation of health interventions, and that the type of CE undertaken in intervention delivery focuses on peer interactions alone. This suggests that there is very little co-creation of knowledge or building of social capital occurring in evaluated health intervention studies using online social media/networking.