1,252 research outputs found

    Opaque analysis for resource-sharing components in hierarchical real-time systems : extended version

    A real-time component may be developed under the assumption that it has the entire platform at its disposal. Composing a real-time system from independently developed components may require resource sharing between components. We propose opaque analysis methods to integrate resource-sharing components into hierarchically scheduled systems. Resource sharing imposes blocking times within an individual component and between components. An opaque local analysis ignores global blocking between components and makes it possible to analyse an individual component under the assumption that shared resources are exclusively available to that component. To arbitrate mutually exclusive resource access between components, we consider four existing protocols: SIRAP, BROE and HSRP, the latter comprising overrun with payback (OWP) and overrun without payback (ONP). We classify local analyses for each synchronization protocol based on the notion of opacity and develop new analyses for those protocols that are non-opaque. Finally, we compare SIRAP, ONP, OWP and BROE by means of an extensive simulation study. From the results, we derive guidelines for selecting a global synchronization protocol.
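The intra-component blocking mentioned above can be illustrated with the classic fixed-priority response-time recurrence extended with a blocking term, the form that local analyses with shared resources build on. This is a minimal sketch with illustrative task parameters, not the paper's opaque analysis:

```python
import math

def response_time(i, tasks):
    """Worst-case response time of task i: R = C_i + B_i + interference
    from higher-priority tasks, iterated to a fixed point."""
    C, T, B = tasks[i]
    R = C + B
    while True:
        R_next = C + B + sum(math.ceil(R / Tj) * Cj
                             for Cj, Tj, _ in tasks[:i])  # higher-priority tasks
        if R_next == R:
            return R          # fixed point reached
        if R_next > T:
            return None       # exceeds period (= deadline): unschedulable
        R = R_next

# (C, T, B) triples in decreasing priority order; B is the blocking bound
tasks = [(1, 4, 0), (2, 6, 1), (2, 12, 2)]
print([response_time(i, tasks) for i in range(len(tasks))])  # [1, 4, 11]
```

An opaque local analysis in the paper's sense would keep a recurrence of this shape while hiding where the blocking bound B comes from.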

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms consisting of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints. Fulfilling these timing constraints of a component requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool to make the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates) and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing.
To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. Once admitted to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions to these issues cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform, and the only difference it experiences is a processor speed that differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting deadline constraints of tasks on a uni-processor platform.
For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling policy or of the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults. We do this by means of experiments within an HSF-enabled real-time operating system.
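The discontinuous supply of processor time discussed above is commonly captured by the periodic resource model of Shin and Lee, in which a component is guaranteed Θ units of budget every period Π. A minimal sketch of its supply bound function, with illustrative parameters:

```python
def sbf(t, P, Q):
    """Worst-case processor supply of a periodic resource (P, Q) in any
    window of length t: an initial blackout of 2*(P - Q), then Q units of
    budget delivered per period of length P."""
    blackout = 2 * (P - Q)
    if t <= blackout:
        return 0
    k, r = divmod(t - blackout, P)   # full periods and remainder after blackout
    return k * Q + min(r, Q)         # budget accrues at most Q per period

# a component reserving 2 time units every 4: no supply guaranteed before t = 4
print([sbf(t, 4, 2) for t in (2, 4, 5, 8, 12)])  # [0, 0, 1, 2, 4]
```

In an HSF analysis, a component is then schedulable if its demand in every window of length t never exceeds sbf(t); the names and parameters here are illustrative, not taken from the thesis.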

    Software parametrization of feasible reconfigurable real-time systems under energy and dependency constraints

    Enforcing temporal constraints is necessary to maintain the correctness of a real-time system. However, a real-time system may be subject to many factors and constraints that pose different challenges to overcome. In other words, to achieve their real-time requirements, these systems face various challenges, particularly in terms of architecture, reconfiguration, energy consumption, and dependency constraints. Unfortunately, the characterization of real-time task deadlines is a relatively unexplored problem in the real-time community. Most of the literature seems to assume that deadlines are somehow provided as hard assumptions; if these deadlines are violated at runtime, the resulting development-time costs can be high. In this context, the main aim of this thesis is to determine effective temporal properties that will certainly be met at runtime under well-defined constraints. We address these challenges in a step-wise manner, each time selecting a well-defined subset of challenges to solve. This thesis deals with reconfigurable real-time systems on mono-core and multi-core architectures. First, we propose a new scheduling strategy based on configuring feasible schedules of software tasks of various types (periodic, sporadic, and aperiodic) and constraints (hard and soft) on a mono-core architecture. The second contribution then deals with reconfigurable real-time systems on a mono-core architecture under energy and resource-sharing constraints. Finally, a third contribution extends the approach to multi-core architectures.
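The feasibility-oriented scheduling strategy described above ultimately rests on classic utilization tests. As a toy illustration of combining feasibility with an energy concern, the sketch below picks the lowest DVFS frequency at which a periodic task set remains EDF-feasible (implicit deadlines). All names and parameters are illustrative assumptions; this is not the thesis's algorithm:

```python
def utilization(tasks):
    """Total processor utilization of implicit-deadline periodic tasks,
    with WCETs measured at full speed."""
    return sum(c / t for c, t in tasks)

def lowest_feasible_freq(tasks, freqs):
    """Smallest normalized frequency f with U / f <= 1 under EDF:
    running at speed f scales every WCET by 1/f."""
    u = utilization(tasks)
    for f in sorted(freqs):
        if u <= f:
            return f
    return None  # infeasible even at the highest available frequency

tasks = [(1, 5), (2, 10), (3, 20)]          # (WCET at full speed, period)
print(lowest_feasible_freq(tasks, [0.4, 0.6, 0.8, 1.0]))  # 0.6
```

Energy-aware schedulers commonly run at the lowest such frequency, since dynamic power grows superlinearly with clock speed.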

    Health Care Costs and the Arc of Innovation

    Health care costs continue their inexorable rise, threatening America’s long-term fiscal stability, competitiveness, and standard of living. Over the past half-century, efforts to rein in spending have uniformly failed. In this Article, we explain why, breaking with standard accounts of regulatory and market dysfunction. We point instead to the nexus of economics, mutual empathy, and social expectations that drives medical innovation and locks in low-value technologies. We show how law reflects and reinforces this nexus and how and why health policymakers avert their gaze. Next, we propose to circumvent these barriers instead of surmounting them. Rather than targeting today’s excessive spending, we seek to leverage available legal tools to bend the arc of innovation, away from marginally beneficial technology and toward high-value advances. To this end, we set forth a novel, value-based approach to pricing and patent protection—one that departs sharply from current practice by rewarding innovators in proportion to the therapeutic benefits new tests and treatments yield. Using cancer therapy as an example, we explain how emerging information technology and large troves of electronic clinical data are opening the way to near-real-time assessment of efficacy. We then show how such assessment can power ongoing adjustment of pricing and patent terms. Finally, we offer a blueprint for how laws governing health care payment and intellectual property can be tailored to realize this value-focused vision. For the reasons we lay out, the transformation of incentives we urge will both slow clinical spending growth and greatly enhance the social value that this spending yields.

    PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment

    From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized the majority of its research around six testbeds. Two buildings and two bridges, a campus, and a transportation network were selected as case studies to “exercise” the PEER performance-based earthquake engineering methodology. All projects involved interdisciplinary teams of researchers, each producing data to be used by other colleagues in their research. The testbeds demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of damage measures, loss analysis, and decision variables. This report describes one of the building testbeds—the UC Science Building. The project was chosen to focus attention on the consequences of losses of laboratory contents, particularly downtime. The UC Science testbed evaluated the earthquake hazard and the structural performance of a well-designed, recently built reinforced concrete laboratory building using the OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory contents in order to develop fragility curves used to analyze the probability of losses based on equipment failure. The UC Science testbed undertook an extreme case in performance assessment—linking performance of contents to operational failure. The research shows the interdependence of building structure, systems, and contents in performance assessment, and highlights where further research is needed. The Executive Summary provides a short description of the overall testbed research program, while the main body of the report includes summary chapters from individual researchers. More extensive research reports are cited in the reference section of each chapter.

    Energy aware task scheduling with task synchronization for embedded real time systems

    CROSS-STACK PREDICTIVE CONTROL FRAMEWORK FOR MULTICORE REAL-TIME APPLICATIONS

    Many of the next generation applications in entertainment, human computer interaction, infrastructure, security and medical systems are computationally intensive, always-on, and have soft real time (SRT) requirements. While failure to meet deadlines is not catastrophic in SRT systems, missing deadlines can result in an unacceptable degradation in the quality of service (QoS). To ensure acceptable QoS under dynamically changing operating conditions such as changes in the workload, energy availability, and thermal constraints, systems are typically designed for worst case conditions. Unfortunately, such over-designing of systems increases costs and overall power consumption. In this dissertation we formulate real-time task execution as a Multiple-Input, Single-Output (MISO) optimal control problem involving tracking a desired system utilization set point with control inputs derived from across the computing stack. We assume that an arbitrary number of SRT tasks may join and leave the system at arbitrary times. The tasks are scheduled on multiple cores by a dynamic priority multiprocessor scheduling algorithm. We use a model predictive controller (MPC) to realize optimal control. MPCs are easy to tune, can handle multiple control variables, and can accommodate constraints on both the dependent and independent variables. We experimentally demonstrate the operation of our controller on a video encoder application and a computer vision application executing on a dual socket quad-core Xeon processor with a total of 8 processing cores. We establish that using DVFS and application quality as control variables enables operation at a lower power operating point while meeting real-time constraints, compared to non-cross-stack control approaches. We also evaluate the role of scheduling algorithms in the control of homogeneous and heterogeneous workloads. Additionally, we propose a novel adaptive control technique for time-varying workloads.
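The set-point-tracking formulation can be illustrated with a toy one-step receding-horizon controller over a scalar utilization model. The model, its gain, and the actuator bounds are invented for illustration; this is not the dissertation's MISO MPC:

```python
def mpc_step(u_meas, u_ref, f, gain=0.5, f_min=0.2, f_max=1.0):
    """One receding-horizon step: pick the frequency change that minimizes
    the next-step tracking error (u_next - u_ref)**2, subject to actuator
    bounds. For the scalar model u_next = u_meas - gain * df, the
    unconstrained minimizer has a closed form."""
    df = (u_meas - u_ref) / gain          # drive predicted error to zero
    return min(f_max, max(f_min, f + df)) # clip to the feasible frequency range

# closed loop on a plant that matches the controller's model
f, u = 0.5, 0.9                           # initial frequency and utilization
for _ in range(5):
    f_new = mpc_step(u, u_ref=0.7, f=f)
    u -= 0.5 * (f_new - f)                # plant: higher speed lowers utilization
    f = f_new
print(round(u, 3), round(f, 3))           # settles at the 0.7 set point
```

A real MPC would optimize over a multi-step horizon and several inputs (frequency, application quality), but the receding-horizon structure is the same: measure, optimize, apply the first move, repeat.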

    Airport Evacuation Strategies for Passengers with Reduced Mobility: Simulation of Structural Configurations

    Airport emergency cases are becoming more common; it is therefore extremely important to have good emergency and evacuation protocols that can be applied easily and quickly, so that the number of people affected is minimized. Simulating these emergencies is important for implementing and evaluating evacuation plans. Evacuation plans are often designed around passengers who, in an emergency, are self-sufficient and physically able to evacuate the airport unassisted; they are not optimized for passengers with reduced mobility, who require assistance from others and thus more time to evacuate. This study aims to understand and identify key issues in how passengers with reduced mobility are considered in current evacuation plans, and to explore possible solutions for optimizing their evacuation. To that end, we performed an airport evacuation simulation using an egress simulation tool; the results show that when passengers with mobility impairments use egress routes and exits separate from those of other occupants, evacuation times decrease. Both groups of occupants can therefore egress faster and through less congested doors.
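The finding that separate routes help both groups can be illustrated with a toy door-capacity model. The occupant counts and per-door flow rates below are invented assumptions for illustration and are not the study's simulation results:

```python
def evac_time(occupants, doors, flow_per_door):
    """Idealized clearing time: occupants divided by total door throughput
    (persons per second)."""
    return occupants / (doors * flow_per_door)

ambulant, reduced_mobility = 900, 100

# shared strategy: mixed flow through all 4 doors, slowed by mixing (assumed rate)
shared = evac_time(ambulant + reduced_mobility, 4, 0.8)

# separate strategy: 3 doors for ambulant passengers at full rate, plus one
# dedicated, slower door for reduced-mobility passengers; the evacuation
# finishes when the slower of the two groups clears
separate = max(evac_time(ambulant, 3, 1.1),
               evac_time(reduced_mobility, 1, 0.5))

print(round(shared, 1), round(separate, 1))  # 312.5 272.7
```

Under these assumed rates the segregated layout clears everyone sooner, mirroring the qualitative effect reported above; a real egress simulator models congestion, route geometry, and pre-movement times rather than constant flows.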

    Flexible Real-Time Linux a New Environment for Flexible Hard Real-Time Systems

    This thesis proposes a new general framework for building flexible hard real-time systems, that is, systems that require both hard real-time guarantees and flexible behaviour. The proposed framework can integrate tasks with several criticality levels and different scheduling paradigms in the same system. As a result, it provides hard real-time guarantees for the critical tasks while achieving adaptive, intelligent scheduling of the less critical ones. The framework is defined in terms of a task model, a software architecture, and a set of services. The task model builds a flexible hard real-time application as a set of tasks, each structured as a sequence of mandatory and optional components. The software architecture separates task execution into two interrelated scheduling levels: one level schedules the mandatory components under a hard real-time scheduling policy, while the other schedules the optional components under a utility-based policy. The set of services includes, on the one hand, a communication system between task components (both mandatory and optional) and, on the other, mechanisms for detecting and handling timing exceptions raised at run time. The thesis also shows that the proposed theoretical framework can actually be implemented. Specifically, it presents the design and implementation of a run-time system (that is, an operating-system kernel) that supports the framework's features. This system, called Flexible Real-Time Linux (FRTL), was developed from an existing minimal kernel called Real-Time Linux (RT-Linux). Finally, the thesis presents a complete temporal characterization of FRTL and real measurements of its overhead. The temporal characterization has enabled the development of a complete guarantee test for the whole system (including both the application and the FRTL kernel), which can be used to verify the timing constraints of any application implemented on FRTL. The overhead measurements show that the kernel has been designed and implemented efficiently. Together, these results demonstrate that FRTL is both predictable and efficient, two properties that attest to its usefulness for the real implementation of flexible hard real-time applications. Terrasa Barrena, AM. (2001). Flexible Real-Time Linux a New Environment for Flexible Hard Real-Time Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1806
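The two-level architecture described above (hard real-time scheduling for the mandatory components, utility-based scheduling for the optional ones) can be sketched with a toy greedy scheduler. Task data and the greedy utility policy are illustrative assumptions; this is not FRTL's actual scheduler:

```python
def schedule(mandatory, optional, horizon):
    """mandatory: (deadline, cost) pairs; optional: (utility, cost) pairs.
    Level 1 serves mandatory parts in deadline (EDF) order; level 2 fills
    the remaining slack with optional parts in decreasing utility.
    Returns (mandatory parts executed, total utility accrued)."""
    t, done = 0, 0
    for deadline, cost in sorted(mandatory):          # earliest deadline first
        t += cost
        if t > min(deadline, horizon):
            raise RuntimeError("mandatory part unschedulable")
        done += 1
    utility = 0
    for u, cost in sorted(optional, reverse=True):    # highest utility first
        if t + cost <= horizon:                       # fits in the slack
            t += cost
            utility += u
    return done, utility

print(schedule([(4, 2), (9, 3)], [(10, 3), (4, 1), (7, 5)], horizon=10))
# (2, 14): both mandatory parts meet their deadlines; slack admits two
# optional parts worth 10 + 4 utility
```

The key property mirrored here is that optional work can never delay a mandatory part: the hard real-time level is scheduled first and the utility level only consumes leftover capacity.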