
    TANGO: Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation

    The paper is concerned with how software systems actually use Heterogeneous Parallel Architectures (HPAs), with the goal of optimising power consumption on these resources. It argues the need for novel methods and tools that support software developers in optimising the power consumed when designing, developing, deploying and running software on HPAs, while keeping other quality aspects of the software at adequate and agreed levels. To this end, a reference architecture supporting energy efficiency at application construction, deployment, and operation is discussed, together with its implementation and evaluation plans. Comment: Part of the Program Transformation for Programmability in Heterogeneous Architectures (PROHA) workshop, Barcelona, Spain, 12th March 2016, 7 pages, LaTeX, 3 PNG figures.

    ATMP: An Adaptive Tolerance-based Mixed-criticality Protocol for Multi-core Systems

    The challenge of mixed-criticality scheduling is to keep tasks of higher criticality running in case of resource shortages caused by faults. Traditionally, mixed-criticality scheduling has focused on methods for handling faults where tasks overrun their optimistic worst-case execution time (WCET) estimate. In this paper we present the Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), which generalises the concept of mixed-criticality scheduling to also handle faults of a different nature, such as the failure of cores in a multi-core system. ATMP is an adaptation method triggered by resource shortage at runtime. Its first step is to re-partition the tasks over the available cores; its second step is to optimise the utility at each core using the tolerance-based real-time computing model (TRTCM). The evaluation shows that the utility optimisation of ATMP achieves a smoother degradation of service than simply abandoning tasks.
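
    A minimal sketch of the adaptation idea described above, under assumed simplifications: when a core fails, tasks are re-partitioned over the remaining cores criticality-first, and per-core overload is resolved by stretching the periods of the least critical tasks before dropping anything. The task set, utility model and stretch bound are hypothetical illustrations; TRTCM itself is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float            # worst-case execution time (ms)
    nominal_period: float  # desired period (ms)
    criticality: int       # higher value = more critical
    period: float = 0.0    # current (possibly stretched) period

    def __post_init__(self):
        self.period = self.nominal_period

    @property
    def util(self) -> float:
        return self.wcet / self.period

def repartition(tasks, n_cores):
    """Criticality-first worst-fit: place the most critical tasks first,
    each on the currently least loaded core."""
    bins, load = [[] for _ in range(n_cores)], [0.0] * n_cores
    for t in sorted(tasks, key=lambda t: (-t.criticality, -t.util)):
        i = load.index(min(load))
        bins[i].append(t)
        load[i] += t.util
    return bins

def degrade(core_tasks, max_stretch=4.0, step=1.1):
    """Resolve per-core overload by stretching the periods of the least
    critical tasks first (graceful degradation instead of dropping them)."""
    for t in sorted(core_tasks, key=lambda t: t.criticality):
        while sum(x.util for x in core_tasks) > 1.0 and \
                t.period < max_stretch * t.nominal_period:
            t.period = min(t.period * step, max_stretch * t.nominal_period)

# A core has failed: re-partition the task set onto the 2 remaining cores.
tasks = [Task("engine", 8, 20, 3), Task("airbag", 3, 10, 3),
         Task("brake", 4, 10, 2), Task("logging", 6, 10, 1),
         Task("infotainment", 12, 20, 1)]
for i, core in enumerate(repartition(tasks, n_cores=2)):
    degrade(core)
    print(f"core {i}:", [(t.name, round(t.period, 1)) for t in core])
```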

    On the tailoring of CAST-32A certification guidance to real COTS multicore architectures

    The use of Commercial Off-The-Shelf (COTS) multicores in the real-time industry is on the rise, owing to multicores' potential for higher performance and lower energy consumption. Yet the unpredictable timing impact of contention in shared hardware resources challenges certification. Furthermore, most safety certification standards target single-core architectures and do not provide explicit guidance for multicore processors. Recently, however, CAST-32A has been presented, providing guidance for software planning, development and verification on multicores. In this paper, at a theoretical level, we provide a detailed review of the CAST-32A objectives and the difficulty of reaching them under current COTS multicore design trends; at an experimental level, we assess the difficulties of applying CAST-32A to a real multicore processor, the NXP P4080. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal grant RYC-2013-14717.

    High-Integrity Performance Monitoring Units in Automotive Chips for Reliable Timing V&V

    As software continues to control more system-critical functions in cars, its timing is becoming an integral element of functional safety. Timing validation and verification (V&V) assesses software's end-to-end timing measurements against given budgets. The advent of multicore processors with massive resource sharing reduces the significance of end-to-end execution times for timing V&V and requires reasoning about (worst-case) access delays on contention-prone hardware resources. While Performance Monitoring Units (PMUs) support this finer-grained reasoning, their design has never been a prime consideration in high-performance processors, from which automotive-chip PMU implementations descend, since the PMU does not directly affect performance or reliability. Given the PMU's instrumental importance for timing V&V, we advocate PMUs in automotive chips that explicitly track activities related to worst-case (rather than average) software behavior, are recognized as an ISO 26262 mandatory high-integrity hardware service, and are accompanied by detailed documentation that enables their effective use to derive reliable timing estimates. This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship number IJCI-2016-27396.
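
    One way such PMU readings feed timing V&V is by bounding the contention delay a task can suffer: each contention-prone event count is multiplied by an assumed worst-case per-access delay and the products are summed. The sketch below only illustrates that arithmetic; the event names, delay figures and counts are hypothetical and not taken from the paper or from any particular chip.

```python
# Hypothetical bound on contention delay derived from PMU event counts:
#   delay_bound = sum over events e of count(e) * worst_case_delay(e)
WORST_CASE_DELAY_CYCLES = {       # assumed per-access upper bounds (illustrative)
    "shared_bus_accesses":   20,
    "l2_misses":             60,
    "dram_row_conflicts":    90,
}

def contention_bound(event_counts: dict) -> int:
    """Upper bound (in cycles) on the delay caused by the monitored
    contention-prone accesses during one run."""
    return sum(WORST_CASE_DELAY_CYCLES[e] * n for e, n in event_counts.items())

# Counts as they might be read from PMU registers after a test run.
observed = {"shared_bus_accesses": 1200, "l2_misses": 300, "dram_row_conflicts": 25}
bound = contention_bound(observed)
padded_budget = 5_000_000 + bound   # measured solo timing estimate + contention margin
print(f"contention bound: {bound} cycles, padded timing budget: {padded_budget} cycles")
```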

    Bao: A Lightweight Static Partitioning Hypervisor for Modern Multi-Core Embedded Systems


    A Many-Core Platform with Run-Time Monitoring to Support Separation of Mixed-Criticality Applications

    Modern multi- and many-core platforms offer sufficient resources to further increase the performance of advanced applications and to integrate multiple applications that formerly ran on separate chips. The large amount of resources can additionally be used to map applications redundantly to more resources than required, increasing reliability, while spare components can replace faulty ones at run time for higher availability. A suitable platform must therefore allow applications to be remapped and peripherals to be replaced dynamically; mapping to distributed resources, and communication among them, should ideally be transparent and flexible enough to allow changes at run time. This thesis presents a parameterizable and synthesizable many-core platform that realizes these requirements by virtualizing all resources available on the platform. The platform is used as a research vehicle to develop mechanisms for separating applications of different criticalities on a shared platform. On a many-core platform that runs mixed-criticality applications, all applications have to be sufficiently separated; otherwise every application, even a low-critical one, has to fulfill the requirements of the highest criticality level, which would significantly increase the cost of a shared platform. Separation enables the individual development and certification of applications and the cost-efficient recertification of single applications after an update. Separation covers not only independence in terms of time and space but also in terms of power consumption, since the available energy of a many-core system is shared between all running applications: increased power consumption of one application may reduce the energy available to other applications or the reliability and lifetime of the complete chip. Besides static separation of mixed-criticality applications by assigning them to separate resources, a fast and scalable monitoring and control mechanism allows safe and efficient sharing of resources by enforcing the specified behavior of applications at run time. All in all, this thesis' contribution is a step towards exploiting the benefits of multi- and many-core platforms for mixed-criticality applications.
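
    A minimal software sketch of the run-time monitoring idea described above, under assumed simplifications: each application declares a per-window budget of memory transactions as its specified behavior, and the monitor throttles an application for the rest of the window once it exceeds that budget. Names, budgets and the window model are hypothetical; the thesis implements such monitoring in hardware.

```python
from dataclasses import dataclass

@dataclass
class AppBudget:
    name: str
    criticality: int
    max_mem_tx_per_window: int   # declared (specified) behavior

class WindowMonitor:
    """Counts memory transactions per application within a monitoring window
    and throttles an application once it exceeds its declared budget."""
    def __init__(self, budgets):
        self.budgets = {b.name: b for b in budgets}
        self.counts = {b.name: 0 for b in budgets}
        self.throttled = set()

    def record_tx(self, app: str) -> bool:
        """Return True if the transaction is admitted, False if it is blocked."""
        if app in self.throttled:
            return False
        self.counts[app] += 1
        if self.counts[app] > self.budgets[app].max_mem_tx_per_window:
            self.throttled.add(app)          # enforce the specified behavior
            return False
        return True

    def new_window(self):
        """Reset counters and throttling at the start of each window."""
        self.counts = {name: 0 for name in self.counts}
        self.throttled.clear()

# A low-criticality app bursting beyond its budget gets throttled, so the
# high-criticality app keeps its share of the memory bandwidth.
monitor = WindowMonitor([AppBudget("brake_ctrl", 3, 1000),
                         AppBudget("infotainment", 1, 200)])
admitted = sum(monitor.record_tx("infotainment") for _ in range(500))
print(f"infotainment: {admitted} of 500 transactions admitted this window")
```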

    A Survey of Research into Mixed Criticality Systems

    This survey covers research into mixed-criticality systems published since Vestal's seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single-processor analysis (including fixed-priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed-criticality systems and other topics such as hard and soft time constraints, fault-tolerant scheduling, hierarchical scheduling, cyber-physical systems, probabilistic real-time systems, and industrial safety standards.
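
    As a concrete illustration of the mixed-criticality task model the surveyed work builds on (tasks carrying both an optimistic and a pessimistic WCET estimate), the sketch below runs classic fixed-priority response-time analysis using the low-criticality-mode WCETs. The task set and its numbers are made up for the example and are not taken from the survey.

```python
import math
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float
    c_lo: float        # optimistic WCET estimate (LO mode)
    c_hi: float        # pessimistic WCET estimate (relevant for HI tasks)
    criticality: str   # "LO" or "HI"

def response_time_lo(tasks):
    """Fixed-priority response-time analysis with LO-mode WCETs, priorities
    assigned rate-monotonically (implicit deadlines equal to periods)."""
    tasks = sorted(tasks, key=lambda t: t.period)        # higher priority first
    results = {}
    for i, t in enumerate(tasks):
        hp = tasks[:i]                                   # higher-priority tasks
        r = t.c_lo
        while True:
            r_next = t.c_lo + sum(math.ceil(r / h.period) * h.c_lo for h in hp)
            if r_next == r or r_next > t.period:
                break
            r = r_next
        results[t.name] = r_next if r_next <= t.period else None  # None = unschedulable
    return results

# Hypothetical task set: HI tasks have both a LO and a HI WCET estimate.
taskset = [Task("ctrl",  period=10, c_lo=2,  c_hi=4,  criticality="HI"),
           Task("sense", period=20, c_lo=4,  c_hi=8,  criticality="HI"),
           Task("log",   period=50, c_lo=10, c_hi=10, criticality="LO")]
print(response_time_lo(taskset))   # {'ctrl': 2, 'sense': 6, 'log': 18}
```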

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and use, the safety of such subsystems is an increasingly important matter, and they are subject to safety certification that demonstrates system safety through rigorous development processes and hardware/software constraints. The massive increase in embedded processors' complexity makes the already arduous certification task significantly harder. The focus of this thesis is to address the certification challenges of multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique emerged as an alternative way to deal with hardware/software complexity. The innovation that MBPTA brings is, however, a major step away from current certification procedures and standards. The particular contributions of this thesis are: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and procedures required to comply with functional safety standards. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk-reduction requirements of safety standards. To this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we evaluate current certification guidance with respect to emerging high-performance design trends such as caches. Overall, this thesis pushes the certification limits of the use of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry.
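
    At its core, MBPTA applies extreme value theory to measured execution times in order to estimate the probability of exceeding a candidate execution-time bound. The sketch below shows one common formulation (block maxima fitted with a generalised extreme value distribution via scipy); the measurement data are synthetic, and the block size and probability threshold are arbitrary illustrative choices, not values from the thesis.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic "measured" execution times (cycles) from a time-randomised platform.
measurements = 100_000 + rng.gamma(shape=4.0, scale=500.0, size=4000)

# Block-maxima approach: split the runs into blocks and keep each block's maximum.
block_size = 50
trimmed = measurements[: len(measurements) // block_size * block_size]
maxima = trimmed.reshape(-1, block_size).max(axis=1)

# Fit a generalised extreme value (GEV) distribution to the block maxima.
shape, loc, scale = genextreme.fit(maxima)

# Probability that a block maximum exceeds a candidate bound, and the bound
# whose per-block exceedance probability is 1e-9 (a pWCET-style estimate).
candidate = 110_000
p_exceed = genextreme.sf(candidate, shape, loc=loc, scale=scale)
pwcet_1e9 = genextreme.isf(1e-9, shape, loc=loc, scale=scale)
print(f"P(block maximum > {candidate}) ~ {p_exceed:.2e}")
print(f"Execution-time bound at 1e-9 per-block exceedance ~ {pwcet_1e9:.0f} cycles")
```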

    CONTREX: Design of embedded mixed-criticality CONTRol systems under consideration of EXtra-functional properties

    The increasing processing power of today's HW/SW platforms leads to the integration of more and more functions in a single device. Additional design challenges arise when these functions share computing resources and belong to different criticality levels. CONTREX complements current activities in the area of predictable computing platforms and segregation mechanisms with techniques that take extra-functional properties, i.e., timing constraints, power, and temperature, into account. CONTREX enables energy-efficient and cost-aware design through analysis and optimisation of these properties with regard to application demands at different criticality levels. This article presents an overview of the CONTREX European project, its main innovative technology (extension of a model-based design approach, functional and extra-functional analysis with executable models, and run-time management), and the final results of three industrial use cases from different domains (avionics, automotive and telecommunication). The work leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2011 under grant agreement no. 611146.