9 research outputs found

    Nova combinação de hardware e de software para veículos de desporto automóvel baseada no processamento directo de funções gráficas (New hardware and software combination for motorsport vehicles based on direct processing of graphical functions)

    Get PDF
    Doctoral thesis in Electronic Engineering. The main motivation for the work presented here began with earlier experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that this would minimize the full range of software and hardware needed to obtain a final, fully functional system. The thesis first presents a comprehensive survey of the state of the art in the specific area of the software, and corresponding hardware, of automotive tools and automotive ECUs. The problems arising from such software are identified, and it becomes clear that practically all of them stem directly or indirectly from the continued, extensive use of extremely long and complex tool chains. Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and interdependency within processor architectures. The conclusions are presented through an extensive list of "pitfalls", which are thoroughly enumerated, identified and characterized. Solutions are then proposed for the various current issues, together with their implementation. All of this final work is part of a proof-of-concept system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, covering arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result is a single, fully integrated tool enabling the development and management of the entire system through one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, offering high flexibility and scalability while using exactly the same technology for the ECU, the data logger and the peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters, but never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis has the advantage of following a clean-slate approach, whereby everything could be rethought globally. In the end, out of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control) 100% online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and usability.
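    The thesis targets a custom visual tool and dedicated hardware, so no code accompanies the record. Purely as an illustrative sketch of the "Macro" idea described above (a block wrapping a single arithmetic, logic, filtering or multiplexing operation, wired into a network evaluated once per control cycle), the Python fragment below uses hypothetical class and function names that are not from the thesis.

```python
# Minimal sketch of the "Macro" idea: each Macro wraps one primitive
# operation (arithmetic, logic, first-order filter, multiplexer, ...)
# and Macros are wired into a dataflow network evaluated once per
# control cycle. Names are illustrative, not from the thesis.

class Macro:
    def __init__(self, op, *inputs):
        self.op = op            # callable implementing the primitive operation
        self.inputs = inputs    # upstream Macros or constants
        self.value = 0.0        # last computed output

    def evaluate(self):
        args = [i.value if isinstance(i, Macro) else i for i in self.inputs]
        self.value = self.op(*args)
        return self.value


def lowpass(alpha):
    """First-order low-pass filter as a stateful primitive."""
    state = {"y": 0.0}
    def step(x):
        state["y"] += alpha * (x - state["y"])
        return state["y"]
    return step


# Toy network: filter a sensor value, scale it, then select between two
# candidate outputs with a multiplexer-style Macro.
sensor   = Macro(lambda: 3.2)                       # constant source for the sketch
filtered = Macro(lowpass(0.2), sensor)
scaled   = Macro(lambda x: 1.5 * x, filtered)
selected = Macro(lambda sel, a, b: a if sel else b, 1, scaled, sensor)

for _ in range(5):                                  # one pass per control cycle
    for m in (sensor, filtered, scaled, selected):
        m.evaluate()
print(round(selected.value, 3))
```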

    Un meta-modèle de composants pour la réalisation d'applications temps-réel flexibles et modulaires (A component metamodel for building flexible and modular real-time applications)

    Get PDF
    The increase of software complexity over the years has led researchers in the software engineering field to look for approaches to conceiving and designing new systems. For instance, the service-oriented architecture (SOA) approach is nowadays considered the most advanced way to rapidly develop and integrate modular and flexible applications. One of the guiding principles of software engineering solutions is reusability, and consequently generality, which complicates their application in systems where optimizations are pervasive, such as real-time systems. Creating real-time systems is therefore expensive, because they must be conceived from scratch. In addition, most real-time systems do not benefit from the advantages brought by software engineering approaches, such as modularity and flexibility. This thesis aims to take real-time aspects into account in popular and standard SOA solutions, in order to ease the design and development of modular and flexible real-time applications. This is done by means of a component-based real-time application model that allows the dynamic reconfiguration of the application architecture. The component model is an extension of the SCA standard that integrates quality-of-service attributes on both the service consumer and the service provider in order to establish a real-time-specific service level agreement. This model is executed on top of an OSGi service platform, the de facto standard for the development of modular applications in Java.
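    The component model itself is defined as an SCA extension running on OSGi in Java; the small Python sketch below only mimics the idea of comparing consumer-required against provider-offered real-time quality-of-service attributes before a binding is established. All field and function names are assumptions made for the illustration.

```python
# Rough sketch of attaching quality-of-service attributes to a service
# consumer and provider and checking them before binding, loosely inspired
# by the SCA extension described above. The real work targets Java/OSGi;
# all names and fields here are hypothetical.

from dataclasses import dataclass

@dataclass
class RtQos:
    deadline_ms: float          # maximum acceptable response time
    period_ms: float            # activation period of the service calls

@dataclass
class ProviderOffer:
    wcet_ms: float              # worst-case execution time of the operation
    min_period_ms: float        # shortest period the provider can sustain

def establish_sla(required: RtQos, offered: ProviderOffer) -> bool:
    """Bind the consumer to the provider only if the offer meets the requirement."""
    return (offered.wcet_ms <= required.deadline_ms
            and offered.min_period_ms <= required.period_ms)

# A consumer needing a 5 ms response every 20 ms can be bound to a provider
# whose worst-case execution time is 3 ms at a 10 ms minimum period.
print(establish_sla(RtQos(deadline_ms=5, period_ms=20),
                    ProviderOffer(wcet_ms=3, min_period_ms=10)))   # True
```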

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    Get PDF
    A mixed-criticality real-time system is a real-time system having multiple tasks classified according to their criticality. Research on mixed-criticality systems started in order to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more strongly the system should guarantee its required level of service. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Current mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor systems. The short-term strategy consists of a protocol named Lazy Bailout Protocol (LBP) for scheduling mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation for lower-criticality tasks by providing them with background execution during system idle time. I then refined LBP with variants that aim to further increase the service level provided for lower-criticality tasks; however, this comes at the cost of either additional offline analysis or increased runtime complexity. The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks are allocated to the active cores according to the available resources. ATMP optimises the overall system utility by tuning the system workload in case of a shortage of computing capacity at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be degraded smoothly in order to keep lower-criticality tasks allocated.
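    As a deliberately simplified illustration of the general idea behind bailout-style scheduling (lower-criticality jobs are parked in a background queue served during idle time instead of being dropped), the following Python sketch can be read alongside the abstract. It omits the bailout-fund accounting of the actual LBP protocol, and all names and priorities are invented.

```python
# Simplified sketch: when the system is short of budget, LO-criticality jobs
# are not abandoned but parked in a background queue that is only served
# while the CPU would otherwise be idle. The real protocol's bailout-fund
# accounting is omitted; names are illustrative.

import heapq

def schedule(jobs, overloaded):
    """jobs: list of (priority, name, criticality); lower priority value = more urgent."""
    ready, background, order = [], [], []
    for prio, name, crit in jobs:
        if overloaded and crit == "LO":
            background.append((prio, name))      # demote instead of dropping
        else:
            heapq.heappush(ready, (prio, name))
    while ready:                                 # serve guaranteed jobs first
        order.append(heapq.heappop(ready)[1])
    for _, name in sorted(background):           # then use idle time for LO jobs
        order.append(name)
    return order

jobs = [(1, "ignition", "HI"), (3, "logging", "LO"), (2, "lambda_ctrl", "HI")]
print(schedule(jobs, overloaded=True))   # ['ignition', 'lambda_ctrl', 'logging']
```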

    Model-Based Engineering of Collaborative Embedded Systems

    Get PDF
    This Open Access book presents the results of the "Collaborative Embedded Systems" (CrESt) project, aimed at adapting and complementing the methodology underlying modeling techniques developed to cope with the challenges of the dynamic structures of collaborative embedded systems (CESs) based on the SPES development methodology. In order to manage the high complexity of the individual systems and the dynamically formed interaction structures at runtime, advanced and powerful development methods are required that extend the current state of the art in the development of embedded systems and cyber-physical systems. The methodological contributions of the project support the effective and efficient development of CESs in dynamic and uncertain contexts, with special emphasis on the reliability and variability of individual systems and the creation of networks of such systems at runtime. The project was funded by the German Federal Ministry of Education and Research (BMBF), and the case studies are therefore selected from areas that are highly relevant for Germany’s economy (automotive, industrial production, power generation, and robotics). It also supports the digitalization of complex and transformable industrial plants in the context of the German government's "Industry 4.0" initiative, and the project results provide a solid foundation for implementing the German government's high-tech strategy "Innovations for Germany" in the coming years

    Model-connected safety cases

    Get PDF
    Regulatory authorities require justification that safety-critical systems exhibit acceptable levels of safety. Safety cases are traditionally documents which allow the exchange of information between stakeholders and communicate the rationale of how safety is achieved via a clear, convincing and comprehensive argument and its supporting evidence. In the automotive and aviation industries, safety cases have a critical role in the certification process, and their maintenance is required throughout a system’s lifecycle. Safety-case-based certification is typically handled manually, and the increase in scale and complexity of modern systems renders it impractical and error-prone. Several contemporary safety standards have adopted a safety-related framework that revolves around a concept of generic safety requirements, known as Safety Integrity Levels (SILs). Following these guidelines, safety can be justified through the satisfaction of SILs. Careful examination of these standards suggests that, despite the noticeable differences, there are converging aspects. This thesis elicits the common elements found in safety standards and defines a pattern for the development of safety cases for cross-sector application. It also establishes a metamodel that connects parts of the safety case with the target system architecture and with model-based safety analysis methods. This enables the semi-automatic construction and maintenance of safety arguments, which helps mitigate the problems of manual approaches. Specifically, the proposed metamodel incorporates system modelling, failure information, model-based safety analysis and optimisation techniques to allocate requirements in the form of SILs. The system architecture and the allocated requirements, along with a user-defined safety argument pattern describing the target argument structure, enable the instantiation algorithm to automatically generate the corresponding safety argument. The idea behind model-connected safety cases stemmed from a critical literature review of safety standards and practices related to safety cases. The thesis presents the method, and the implemented framework, in detail and showcases the different phases and outcomes via a simple example. It then applies the method to a case study based on the Boeing 787’s brake system and evaluates the resulting argument against criteria such as scalability. Finally, the contributions compared to traditional approaches are laid out.
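    The metamodel and instantiation algorithm in the thesis are far richer than can be shown here; the fragment below only conveys, over made-up data, the flavour of instantiating an argument pattern: each component with an allocated SIL yields a claim backed by its safety-analysis evidence. All identifiers are hypothetical.

```python
# Very small sketch of instantiating a safety-argument pattern over a system
# model: each component with an allocated integrity level yields one claim
# supported by its safety-analysis evidence. This only illustrates the
# flavour of pattern instantiation; names and data are made up.

components = [
    {"name": "BrakePedalSensor", "sil": 3, "evidence": "FTA-021"},
    {"name": "BrakeController",  "sil": 4, "evidence": "FMEA-007"},
]

def instantiate_argument(top_goal, components):
    lines = [f"G0: {top_goal}"]
    for i, c in enumerate(components, start=1):
        lines.append(f"  G{i}: {c['name']} meets its allocated SIL {c['sil']} requirements")
        lines.append(f"    Sn{i}: supported by safety analysis {c['evidence']}")
    return "\n".join(lines)

print(instantiate_argument("Wheel braking function is acceptably safe", components))
```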

    Simulation-based testing of highly configurable cyber-physical systems: automation, optimization and debugging

    Get PDF
    Cyber-Physical Systems (CPSs) integrate digital cyber technologies with physical processes. The variability of these systems is increasing in order to meet the demands of different customers. As a result, CPSs are becoming configurable, or even product lines, which means that they can be set up in thousands or even millions of configurations. Testing configurable CPSs is a time-consuming process, mainly due to the large number of configurations that need to be tested, which makes it infeasible to use a prototype of the system. As a result, configurable CPSs are being tested using simulation. However, testing CPSs under simulation is still challenging. First, simulation time is usually long since, apart from the software, the physical layer needs to be simulated; this layer is typically described by complex mathematical models, which is computationally very costly. Second, CPSs involve different engineering domains, such as mechanics and electronics, and engineers from different domains typically employ different tools for modeling their subsystems, so co-simulation is employed to interconnect the different modeling and simulation tools. Although co-simulation gives engineers flexibility, the use of several simulation tools makes simulation times even longer. Lastly, when testing CPSs using simulation, different test levels exist (i.e., Model, Software and Hardware-in-the-Loop), which further increases the time needed to execute test cases. This thesis aims at advancing the current practice of testing configurable CPSs by proposing methods for automation, optimization and debugging. Regarding automation, we first propose a tool-supported methodology that automatically generates test system instances able to automatically test configurations of the configurable CPS (e.g., by employing test oracles). Second, we propose a test case generation approach based on multi-objective search algorithms that produces cost-effective test suites. As for optimization, we propose a test case selection approach and a test case prioritization approach, both based on search algorithms, to cost-effectively test configurable CPSs at different test levels. Regarding debugging, we adapt a technique named Spectrum-Based Fault Localization to the product line engineering context and propose a fault isolation method. This permits localizing bugs not only in configurable CPSs but also in any product line where feature models are employed to model variability.
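    The debugging contribution adapts Spectrum-Based Fault Localization to product lines. As a reminder of the core SBFL ingredient, the sketch below computes the standard Ochiai suspiciousness score over invented per-configuration spectra, treating selected features as the elements under suspicion; it does not reproduce the thesis' adaptation or its fault isolation method.

```python
# Standard Ochiai suspiciousness computed over per-configuration spectra,
# with features playing the role of the suspicious elements. Data and
# feature names are made up for the illustration.

from math import sqrt

# Each test configuration: the set of selected features and the verdict.
runs = [
    ({"ABS", "TractionControl"}, "fail"),
    ({"ABS"},                    "fail"),
    ({"TractionControl"},        "pass"),
    ({"ABS", "HillAssist"},      "fail"),
    ({"HillAssist"},             "pass"),
]

def ochiai(feature):
    ef = sum(1 for feats, v in runs if feature in feats and v == "fail")
    nf = sum(1 for feats, v in runs if feature not in feats and v == "fail")
    ep = sum(1 for feats, v in runs if feature in feats and v == "pass")
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

features = {f for feats, _ in runs for f in feats}
for f in sorted(features, key=ochiai, reverse=True):
    print(f"{f:16s} {ochiai(f):.2f}")      # ABS ranks highest: most suspicious
```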

    Improving Adaptiveness of AUTOSAR Embedded Applications

    No full text
    AUTOSAR (AUTomotive Open System Architecture) is the most recent standard for automotive embedded systems. A major drawback of AUTOSAR lies in its lack of flexibility: software-wise, ECUs (Electronic Control Units) are tested, validated and uploaded, and these three steps are performed in a monolithic process. Adding adaptability features to the standard is a major challenge in this context, and may result in considerable savings of time and money. On-line adaptation allows new functionalities to be included in an efficient way. In this paper, we first extract from AUTOSAR the features relevant to enabling dynamic software updates. We then define our approach for performing updates and evaluate it on a RISC microcontroller.
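    The paper's mechanism is implemented for AUTOSAR runnables in C on a RISC microcontroller; the Python sketch below only illustrates one generic way to realise an on-line update, namely staging a new runnable and swapping it into an indirection table at a safe point between scheduling cycles. It is not the authors' mechanism, and all names are hypothetical.

```python
# Generic sketch of an on-line update: keep an indirection table of runnables
# and apply staged replacements atomically at a safe point between scheduling
# cycles, so the running system never has to stop. Names are hypothetical.

def throttle_ctrl_v1(inputs):
    return {"throttle": 0.5 * inputs["pedal"]}

def throttle_ctrl_v2(inputs):
    # updated behaviour deployed without stopping the scheduler
    return {"throttle": 0.5 * inputs["pedal"] + 0.1 * inputs["slope"]}

runnable_table = {"ThrottleCtrl": throttle_ctrl_v1}
pending_updates = {}

def request_update(name, new_runnable):
    pending_updates[name] = new_runnable          # staged, not yet active

def scheduling_cycle(inputs):
    # Safe point: apply staged updates before any runnable of this cycle runs.
    while pending_updates:
        name, runnable = pending_updates.popitem()
        runnable_table[name] = runnable
    return {name: fn(inputs) for name, fn in runnable_table.items()}

print(scheduling_cycle({"pedal": 0.8, "slope": 0.2}))   # old behaviour
request_update("ThrottleCtrl", throttle_ctrl_v2)
print(scheduling_cycle({"pedal": 0.8, "slope": 0.2}))   # updated on-line
```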
