
    Multi-core devices for safety-critical systems: a survey

    Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration offers multiple potential system-level benefits, such as reductions in cost, size, power, and weight. However, safety certification becomes a challenge, and several fundamental safety requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview, at different device abstraction levels (nanoscale, component, and device), of selected key research contributions that support compliance with these fundamental safety requirements. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P, the Basque Government under grant KK-2019-00035, and the HiPEAC Network of Excellence. The Spanish Ministry of Economy and Competitiveness has also partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717).

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and roles, the safety of such subsystems is an increasingly important matter, and they are subject to safety certification, which demonstrates a system's safety through rigorous development processes and hardware/software constraints. The massive increase in the complexity of embedded processors makes the already arduous certification task significantly harder. The focus of this thesis is to address the certification challenges of multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique has emerged as an alternative for dealing with hardware/software complexity; the innovation that MBPTA brings is, however, a major step away from current certification procedures and standards. The particular contributions of this thesis are: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and fault diagnosis and reaction procedures, compliant with the IEC 61508 functional safety standard, on a reference multicore architecture. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk-reduction requirements of safety standards; to this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we extrapolate current certification guidance for multicore architectures to a commercial 8-core solution and evaluate it with respect to emerging high-performance design trends such as caches. Overall, this thesis pushes the certification limits of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry.
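    As a rough illustration of the exceedance-probability step described above (not the thesis' actual tooling), the sketch below fits a Gumbel distribution, the classical extreme-value model used in MBPTA, to block maxima of synthetic execution-time measurements and evaluates the probability of exceeding a candidate bound; all numbers are invented.

```python
# Illustrative MBPTA-style sketch: fit an extreme-value (Gumbel) model
# to execution-time measurements and estimate the probability that a
# candidate WCET bound is exceeded. Data and thresholds are made up.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)
# Stand-in for execution times (cycles) measured on a randomized
# platform; real MBPTA uses measured traces, not synthetic data.
samples = 100_000 + rng.gumbel(0.0, 800.0, size=1000)

# Extreme-value theory reasons about maxima, so fit on block maxima.
block_maxima = samples.reshape(50, 20).max(axis=1)
loc, scale = gumbel_r.fit(block_maxima)

budget = 110_000  # candidate execution-time bound in cycles
p_exceed = gumbel_r.sf(budget, loc=loc, scale=scale)
print(f"P(exec time > {budget} cycles) ~ {p_exceed:.2e}")
# The bound is acceptable if this probability is below the residual
# risk target derived from the applicable safety standard.
```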

    A Survey of Research into Mixed Criticality Systems

    This survey covers research into mixed-criticality systems published since Vestal's seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic: single-processor analysis (including fixed-priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed-criticality systems and other topics such as hard and soft time constraints, fault-tolerant scheduling, hierarchical scheduling, cyber-physical systems, probabilistic real-time systems, and industrial safety standards.
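    For readers unfamiliar with the underlying model, the sketch below illustrates the core of Vestal's task model as summarised in this survey: each task carries one WCET estimate per criticality level, and low-criticality work is shed when the system switches to a higher-criticality mode. Task names and parameters are invented; this is not code from any surveyed paper.

```python
# Toy sketch of Vestal's mixed-criticality task model: each task has
# one WCET estimate per criticality level (higher levels use more
# conservative estimates). All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "LO" or "HI"
    wcet: dict         # per-level WCET estimates (time units)
    period: int

tasks = [
    Task("airbag_ctrl", "HI", {"LO": 2, "HI": 5}, period=20),
    Task("logging",     "LO", {"LO": 3},          period=50),
]

def active_tasks(mode: str):
    """After a switch to HI mode (a HI task overran its LO-level
    budget), LO tasks are dropped or degraded so that HI tasks can be
    guaranteed their larger HI-level budgets."""
    if mode == "LO":
        return tasks
    return [t for t in tasks if t.criticality == "HI"]

for t in active_tasks("HI"):
    print(t.name, "budget =", t.wcet["HI"])
```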

    Mixed-Criticality Systems on Commercial-Off-the-Shelf Multi-Processor Systems-on-Chip

    Avionics and space industries are struggling with the adoption of technologies like multi-processor systems-on-chip (MPSoCs) due to strict safety requirements. This thesis proposes a new reference architecture for MPSoC-based mixed-criticality systems (MCS), i.e., systems integrating applications with different levels of criticality, which are a common use case for the aforementioned industries. The proposed system architecture is capable of guaranteeing partitioning, which is, in short, the property of fault containment. It is based on the detection of spatial and temporal interference, and has been named the online detection of interference (ODIn) architecture.

    Spatial partitioning requires that an application cannot corrupt resources used by a different application. In the proposed architecture, spatial partitioning is implemented using type-1 hypervisors, which allow the definition of resource partitions; an application running in a partition can only access resources granted to that partition, and therefore cannot corrupt resources used by applications running in other partitions. Temporal partitioning requires that an application cannot unexpectedly change the execution time of other applications. In the proposed architecture, temporal partitioning is achieved using a bounded-interference approach, composed of an offline analysis phase and an online safety net. The offline phase is based on statistical profiling, in nominal conditions, of a metric sensitive to temporal interference, which allows the definition of three thresholds:
    1. the detection threshold TD;
    2. the warning threshold TW;
    3. the α threshold.
    Two detection rules are defined using these thresholds:
    Alarm rule: the value of the metric is above TD.
    Warning rule: the value of the metric stays in the warning region [TW, TD] for more than α consecutive times.

    ODIn's online safety net exploits the performance counters available in many MPSoC architectures; the counters are configured at bootstrap to monitor the selected metric(s) and to raise an interrupt request (IRQ) if a metric value goes above TD, implementing the alarm rule. The warning rule is implemented in a software detection module, which reads the performance counters when the monitored task yields control to the scheduler and resets them if there is no detection. ODIn also uses two additional detection mechanisms:
    1. a control-flow check technique, based on block signatures defined at compile time, implemented through a set of watchdog processors (WDPs), each monitoring one partition;
    2. a timeout, implemented through a system watchdog timer (SWDT), which can send an external signal when the timeout is violated.
    The recovery actions implemented in ODIn are:
    ‱ graceful degradation, which reacts to IRQs from WDPs monitoring non-critical applications or to warning-rule violations; it temporarily stops non-critical applications to grant resources to the critical application;
    ‱ hard recovery, which reacts to the SWDT, to the WDP of the critical application, or to alarm-rule violations; it causes a switch to a hot stand-by spare computer.
    Experimental validation of ODIn was performed on two hardware platforms: the dual-core ZedBoard and the quad-core Inventami board.
    A space benchmark and an avionics benchmark were implemented on both platforms, composed of different modules as shown in Table 1. Each version of the final application was evaluated through fault-injection (FI) campaigns, performed using a specifically designed FI system. Three types of FI campaigns were run:
    1. HW FI, to emulate single-event effects;
    2. SW FI, to emulate bugs in non-critical applications;
    3. artificial-bug FI, to emulate a bug in non-critical applications that introduces unexpected interference on the critical application.
    Experimental results show that ODIn is resilient to all considered types of faults.
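    The two detection rules lend themselves to a compact statement in code. The following sketch mirrors the alarm and warning rules described above over a stream of counter samples; the threshold values are placeholders (in ODIn, TD and TW come from the offline profiling phase, and the alarm rule is realised in hardware via a counter IRQ rather than in software).

```python
# Sketch of ODIn's two detection rules over performance-counter
# samples. Threshold values are invented placeholders.
T_D = 1000      # detection threshold (alarm rule)
T_W = 800       # warning threshold, T_W < T_D
ALPHA = 3       # tolerated consecutive samples in [T_W, T_D]

consecutive_warnings = 0

def check_sample(metric: int) -> str:
    """Classify one counter sample: 'alarm', 'warning', or 'ok'."""
    global consecutive_warnings
    if metric > T_D:                      # alarm rule -> hard recovery
        return "alarm"
    if T_W <= metric <= T_D:              # inside the warning region
        consecutive_warnings += 1
        if consecutive_warnings > ALPHA:  # warning rule -> degradation
            return "warning"
    else:
        consecutive_warnings = 0          # reset on a nominal sample
    return "ok"

for m in [700, 850, 900, 880, 860, 1200]:
    print(m, "->", check_sample(m))
```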

    A novel parallel algorithm for surface editing and its FPGA implementation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Surface modelling and editing is one of the important subjects in computer graphics. Decades of computer graphics research have addressed both low-level, hardware-related algorithms and high-level, abstract software, with success in many application areas, such as multimedia, visualisation, virtual reality and the Internet. However, the hardware realisation of the OpenGL architecture on an FPGA (field-programmable gate array) is beyond the scope of most computer graphics research. It is an uncultivated research area in which the OpenGL pipeline, from the hardware through the whole embedded system (ES) up to the applications, is implemented in an FPGA chip. This research proposes a hybrid approach investigating both software and hardware methods. It aims at bridging the gap between software and hardware methods and enhancing the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-based OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing. The FPGA-based ES is built up first: in addition to the Nios II soft processor and DDR SDRAM memory, it includes an LCD display device, frame buffers, a video pipeline, and an algorithm-specific module to support graphics processing. Since no OpenGL ES implementation is available for FPGA-based ESs, a specific OpenGL implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computation and lower storage than floating-point arithmetic, with accuracy satisfying the needs of 3D rendering. Moreover, the implementation includes BĂ©zier-spline curve and surface algorithms to support surface modelling and editing. Pipelined parallelism and co-processors are used to accelerate graphics processing; these two methods extend traditional computation parallelism to fine-grained parallel tasks in FPGA-based ESs. The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on FPGA-based ESs. Compared with the two main surface-editing methods, subdivision and deformation, PAMA eliminates the large storage requirement and computing cost of intermediate processes. With four independent shape parameters, PAMA can freely model and edit the shape of an open or closed surface while globally keeping zero-order geometric continuity, and it can be applied not only to FPGA-based ESs but also to other platforms. With its parallel processing, small size, and low computing, storage, and power costs, the FPGA-based ES provides an effective hybrid solution to surface modelling and editing.
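    To make the fixed-point design choice concrete, the sketch below evaluates one coordinate of a cubic BĂ©zier curve with de Casteljau's algorithm in Q16.16 fixed point, the kind of integer-only arithmetic that maps cheaply onto FPGA logic. The format and values are illustrative assumptions, not the thesis' actual implementation.

```python
# Minimal fixed-point Bezier sketch: Q16.16 (16 fractional bits)
# instead of floating point, as motivated above. Illustrative only.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fx(x: float) -> int:
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS   # keep 16 fractional bits

def bezier_point(ctrl: list, t_fx: int) -> int:
    """De Casteljau on one coordinate of a cubic Bezier, in Q16.16."""
    pts = ctrl[:]
    while len(pts) > 1:           # repeated linear interpolation
        pts = [fx_mul(ONE - t_fx, p) + fx_mul(t_fx, q)
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl_x = [to_fx(c) for c in (0.0, 1.0, 3.0, 4.0)]
x = bezier_point(ctrl_x, to_fx(0.5))
print(x / ONE)   # ~2.0 for these control points
```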

    Mixed Criticality Systems - A Review (13th Edition, February 2022)

    This review covers research on the topic of mixed-criticality systems published since Vestal's 2007 paper, up to the end of 2021. The review is organised into the following topics: introduction and motivation, models, single-processor analysis (including job-based, hard and soft tasks, fixed-priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice, and research beyond mixed criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.

    Virtual Timing Isolation Safety-Net for Multicore Processors

    Multicore processors promise to offer the performance as well as the reduced space, weight, and power needed by future aircraft. However, commercial off-the-shelf multicore processors suffer from timing interference between cores, which complicates applying them in hard real-time systems such as avionics applications. In this thesis, a safety-net system is proposed which enables virtual timing isolation of the applications running on one core from all other cores. The technique is based on hardware external to the multicore processor and is completely transparent to the applications, i.e., no modification of the observed software is necessary. The basic idea is to apply a worst-case execution time analysis based on single-core execution and to accept a predefined slowdown during multicore execution. If the slowdown exceeds the acceptable bounds, interference is reduced by controlling the behavior of low-criticality cores to keep the main application's progress inside the given bounds. The progress of the applications running on the main core is measured by tracking the application's fingerprint. A fingerprint is created by extracting the performance counters of the critical core in very small timesteps, which results in a characteristic curve for every execution of a periodic program. In standalone mode, without any applications running on the other cores, a model of an application is created by clustering and combining the extracted curves. During runtime, the extracted performance-counter values are compared to the model to determine the progress of the critical application. If the progress of an application is unacceptably delayed, the cores creating the interference, identified by their accesses to the shared resources, are throttled. A controller that takes both the progress of a critical application and the time until the final deadline into account throttles the low-priority cores, either by frequency scaling of the interfering cores or by halting and resuming them with a pulse-width-modulation scheme. The complete safety-net system was evaluated on a TACLeBench benchmark running on an NXP P4080 multicore processor, observed by a Xilinx FPGA implementing a MicroBlaze soft-core microcontroller. The results show that progress can be measured by the fingerprinting with a final deviation of less than 1% for a TACLeBench execution with opponent cores running, and they indicate the non-intrusiveness of the approach. Several experiments demonstrate the effectiveness of the different throttling mechanisms, and evaluations using a real-world avionics application show that the approach can be applied to integrated modular avionics applications. The safety net does not ensure robust partitioning in the conventional sense: the applications on the different cores can influence each other in the timing domain, but the external safety net ensures that the interference on the highly critical application is low enough to keep its timing. This allows for an efficient utilization of the multicore processor. Every critical application is treated individually, and by relying on individual models recorded in standalone mode, both the critical and the non-critical applications running on the other cores can be exchanged without recreating a fingerprint model.
This eases the porting of legacy applications to the multicore processor and allows the exchange of applications without recertification.
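    A minimal sketch of the fingerprint-based progress check, under invented trace data and an assumed slowdown bound: the observed counter value is mapped onto a reference curve recorded in standalone mode, and throttling is requested when the implied slowdown exceeds the accepted factor. The real system works on clustered counter curves in hardware; this is only the idea in miniature.

```python
# Fingerprint-style progress check against a standalone-mode model.
# Reference curve, observed values, and the bound are all invented.
reference = [100, 220, 350, 500, 640, 800]   # counter value per timestep
MAX_SLOWDOWN = 1.20                          # accepted multicore slowdown

def progress_index(model: list, observed_value: int) -> int:
    """Map an observed counter value to the last reference timestep
    whose model value has been reached (the application's progress)."""
    idx = 0
    while idx + 1 < len(model) and model[idx + 1] <= observed_value:
        idx += 1
    return idx

def should_throttle(timestep: int, observed_value: int) -> bool:
    made = progress_index(reference, observed_value) + 1
    expected = timestep + 1
    return expected / made > MAX_SLOWDOWN    # progress lags too far

for step, value in enumerate([100, 230, 360, 380, 660]):
    print(step, value,
          "throttle" if should_throttle(step, value) else "ok")
```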

    Medical devices with embedded electronics: design and development methodology for start-ups

    The biotechnology sector demands constant innovation to meet the challenges of the healthcare sector. Events such as the recent COVID-19 pandemic, the ageing of the population, rising dependency rates, and the need to promote personalised healthcare in both hospital and home settings highlight the need to develop increasingly sophisticated, reliable, and connected monitoring and diagnostic medical devices, quickly and efficiently. In this scenario, embedded systems have become a key technology for rapidly designing innovative, low-cost solutions. Aware of the opportunity in this sector, a growing number of so-called biotech start-ups are entering the medical device business. Despite having great ideas and technical solutions, many end up failing through lack of knowledge of the healthcare sector and of the regulatory requirements that must be met. The large number of technical and regulatory requirements makes a procedural methodology necessary for executing such developments. This thesis therefore defines and validates a methodology for the design and development of embedded medical devices.

    A Many-Core Platform with Run-Time Monitoring to Support Separation of Mixed-Criticality Applications

    Modern multi- and many-core platforms offer sufficient resources for further increasing the performance of advanced applications. Moreover, they allow the integration of multiple applications that formerly ran on separate chips. The large number of resources can additionally be used to map applications redundantly to more resources than required in order to increase reliability, and spare parts can be used to replace faulty components at run time for higher availability. A suitable platform must therefore allow dynamic remapping of applications and replacement of peripherals; the mapping to distributed resources, as well as the communication among resources, is ideally transparent and flexible enough to allow changes at run time. In this thesis, a parameterizable and synthesizable many-core platform is presented which realizes the requirements above by virtualizing all resources available on the platform. The platform is used as a research vehicle to develop mechanisms for separating applications of different criticalities on a shared platform. On a many-core platform that runs mixed-criticality applications, all applications have to be sufficiently separated; otherwise, all applications have to fulfill the requirements of the highest level of criticality, even low-criticality ones, which would significantly increase the costs of a shared platform. Separation enables individual development and certification of applications and cost-efficient recertification of single applications after an update. Separation does not only include independence in terms of time and space, but must also extend to power consumption, as the available energy of a many-core system is shared between all running applications: increased power consumption of one application may reduce the energy available to other applications and, through increased thermal stress, the reliability and lifetime of the complete chip. Besides static separation of mixed-criticality applications by assigning them to separate resources, a fast and scalable run-time monitoring and control mechanism with a short reaction time allows safe and efficient sharing of resources by enforcing the specified behavior of applications at run time. All in all, this thesis's contribution is a step towards exploiting the benefits of multi- and many-core platforms for mixed-criticality applications.
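    One way to picture the enforcement side of such run-time monitoring is a per-window access budget on a shared resource: each application may issue a fixed number of, say, interconnect transactions per time window, and the monitor stalls an application that exhausts its budget. The sketch below is an assumption-laden miniature, not the platform's actual mechanism; application names and budgets are invented.

```python
# Budget-based enforcement sketch: count shared-resource accesses per
# application per time window and stall on budget exhaustion.
WINDOW_BUDGETS = {"flight_ctrl": 500, "infotainment": 100}

class Monitor:
    def __init__(self, budgets: dict):
        self.budgets = budgets
        self.used = {app: 0 for app in budgets}

    def record_access(self, app: str) -> bool:
        """Count one access; return False if the app must be stalled."""
        self.used[app] += 1
        return self.used[app] <= self.budgets[app]

    def new_window(self):
        """Called by a periodic timer: refill every budget."""
        for app in self.used:
            self.used[app] = 0

mon = Monitor(WINDOW_BUDGETS)
for i in range(102):
    if not mon.record_access("infotainment"):
        print("stall infotainment at access", i + 1)
        break
```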

    Custom optimization algorithms for efficient hardware implementation

    The focus is on real-time optimal decision making with applications in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than currently available if their application is to be extended to highly dynamical systems and to setups with resource-constrained embedded computing platforms. A range of techniques is proposed to exploit synergies between digital hardware, numerical analysis, and algorithm design. These techniques build on top of parameterisable hardware code-generation tools that generate VHDL code describing custom computing architectures for interior-point methods and for a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for the KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures, and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation is devised for reducing the computational effort in solving certain problems, independent of the computing platform used. In order to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating point, tailored linear algebra algorithms were developed for solving the linear systems that form the computational bottleneck in many optimization methods; these methods come with guarantees for reliable operation. We also provide finite-precision error analysis for fixed-point implementations of first-order methods, which can be used to minimize resource use while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
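    As a flavour of the combination of first-order methods and fixed-point arithmetic discussed above, the sketch below runs projected gradient descent on a tiny box-constrained QP entirely in Q16.16 fixed point. The problem data, number format, and step size are invented for illustration; a real design would accompany this with the kind of finite-precision error analysis the thesis provides.

```python
# First-order method in fixed point: projected gradient descent for
# min 0.5*x'Hx + f'x  subject to  lb <= x <= ub, all in Q16.16.
FRAC = 16
ONE = 1 << FRAC

def fx(v: float) -> int:
    return int(round(v * ONE))

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC        # product back to Q16.16

H = [[fx(2.0), fx(0.5)],
     [fx(0.5), fx(1.0)]]          # positive definite Hessian (invented)
f = [fx(-1.0), fx(1.0)]
lb = [fx(0.0), fx(0.0)]
ub = [fx(1.0), fx(1.0)]
step = fx(0.4)                    # < 1/L, L ~ 2.21 = largest eigenvalue of H

x = [fx(0.5), fx(0.5)]
for _ in range(50):
    grad = [fx_mul(H[i][0], x[0]) + fx_mul(H[i][1], x[1]) + f[i]
            for i in range(2)]
    # gradient step followed by projection onto the box constraints
    x = [min(max(x[i] - fx_mul(step, grad[i]), lb[i]), ub[i])
         for i in range(2)]

print([v / ONE for v in x])       # ~[0.5, 0.0], the constrained minimizer
```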